Is action at a distance possible as envisaged by the EPR Paradox?

  • #701
DrChinese & my_wan

How about a request to PF Admin for a new option in PF that would allow us to set a "Footer Disclaimer" (maybe thread-specific) that is shown whether the "readers" are logged on or not?

This would probably avoid a lot of unnecessary internal "hubbub"... and be a guarantee for the reader not to get the wrong "impression"...

My "Disclaimer" would look something like this:
I’m a 100% curious layman looking for more knowledge. Naturally, I accept all standards in the scientific community, but I think it’s fun to find (what I imagine) new perspectives and questions (that probably already have been answered). Everything I say can be totally wrong (read at own risk), though I regard myself as perfectly sane - but even this fact could be questioned by some. :wink:

(Realize... this would only work as a "popup function"...)

What do you think?
 
  • #702
EPR-Bell Experiment for Dummies
A Reference for the Rest of Us

Found a very informative video which explains all parts in a modern EPR-Bell setup.

https://www.youtube.com/watch?v=c8J0SNAOXBg

 
  • #703
my_wan said:
I will continue to object to BI violations being presented as an overly general "proof", however significant and physically valid experimentally. I object almost as strongly as I would to absolute claims that it must have a realistic explanation.

my_wan, I honestly think you are stretching the meaning of the words a bit (and I am not trying to criticize as I see words to the same effect from others too). Absolute might be a little strong about ANYTHING we think we know. At some point, you have to say: this is proven, this is supported experimentally, or this is a conjecture. Clearly, all sides are NOT equal.

I would say that Bell is proven, local realism is not experimentally supported, and there are conjectures regarding various interpretations. Are any of these absolutes? I think each of us has a slightly different opinion on that and I don't think that is too important. But it would be quite unfair to characterize local realism as being on the same footing as QM in terms of Bell/Bell tests.
 
  • #704
Absolute may be too strong, but when it is said that Bell's theorem proves non-locality or non-realism, it is overstated. What has been proven is that nature violates BI. I would even go with the extension that it has been irrevocably proven that nature does not assign properties to things in a manner consistent with that one definition of realism.

By the time I was 10 years old, based on purely mechanistic reasoning, the notion of "physical property" as used in classical physics wrt -fundamental- parts sounded like an oxymoron to me. When I apply that same reasoning today wrt BI, BI violations only justify my original, age-10 issues with the notion of fundamental properties. Yet to insist that experimental evidence that "fundamental property" of things is an oxymoron proves the lack of realism in things requires assuming the definition wasn't an oxymoron from the start. Before I ever even started kindergarten, I was sneaking rocks into the car to drop out the window, to compare how the path looked from inside and outside the car. I tried using telephone poles and mailboxes as reference points.

DrChinese said:
At some point, you have to say: this is proven, this is supported experimentally, or this is a conjecture. Clearly, all sides are NOT equal.
BI violations are proven beyond ANY reasonable doubt. But no, the fact that BI violations are factual does not by itself prove any particular interpretation of what they mean physically.

DrChinese said:
I would say that Bell is proven, local realism is not experimentally supported, and there are conjectures regarding various interpretations.
Yes, BI violations are factual, and will never go away as a result of better experiments. But drawing conclusions about realism from them requires the assumption that the definition of realism used wasn't predicated on an oxymoron from the start.

If you take a rabbit to have the property 'rabbit', which eats clover with the property 'clover', what happens to the 'clover' property when the rabbit eats it? Does that mean the 'rabbit' property is not 'real'? If not, does that mean the 'rabbit' is not real?

So yes, you can say with certainty that BI violations are valid. You cannot make claims about what they mean wrt realism in general, irrespective of chosen definitions which might even be an oxymoron from the perspective of realism itself, and then claim that the fact that the definition has remained an oxymoron for some time strengthens the claim that realism is falsified. I find it ironic that experimental evidence for my age-10 perception, itself predicated on realism, that 'physical properties' as defined were an oxymoron is now used to claim realism is falsified.
 
  • #705
How did the original EPR paper actually define realism?
http://www.drchinese.com/David/EPR.pdf

This was the primary completeness condition (stated without equivocation), which is predicated on realism:
(EPR) http://www.drchinese.com/David/EPR.pdf said:
Whatever the meaning assigned to the term complete, the following requirement for a complete theory seems to be a necessary one: every element of physical reality must have a counterpart in physical theory. We shall call this the condition of completeness.

Now this is far more general than the definition actually used, where it was stated:
(EPR) http://www.drchinese.com/David/EPR.pdf said:
A comprehensive definition is, however, unnecessary for our purposes. We shall be satisfied with the following criterion, which we regard as reasonable.

Notice the equivocations? The following definition was, in the original paper itself, disavowed as a complete specification of realism; it was purely utilitarian for the purposes of the argument:
(EPR) http://www.drchinese.com/David/EPR.pdf said:
If, without in any way disturbing the system, we can predict with certainty (i.e., with probability equal to one) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.

Note that "there exists an element of physical reality" is not even a condition that the "physical quantity" associated with it must be singular or innate to singular "elements". This was in a sense the basis on which Einstein rejected von Neumann's proof. Deterministic (classical) was taken to mean dispersion-free, in which the measurables were taken as distinct preexisting properties of individual "beables". Bell showed that the properties of any such "beables" must also depend on the context of the measurement, much like classical momentum is context dependent. How many different times, not counting the abstract, was this definition equivocated? Let's see:
1) A comprehensive definition is, however, unnecessary for our purposes.
2) It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur.
3) Regarded not as a necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality.

The point here is that not even the original EPR paper supported the notion that an invalidation of the singular utilitarian definition used was itself an invalidation of reality, or that singular properties represented singular elements. It allowed many more methods and contexts with which to define reality, and merely chose this one to show, given the assumptions, that cases existed where conservation laws allowed values to be predicted that QM defined as fundamentally undefined. To predicate a proof on this singular utilitarian definition, as proof that all definitions of objective reality are falsified, goes well beyond the claims of the EPR paper. It is also this artificial restriction, to the utilitarian definition provided, that is the weakness in the proof itself.

Look at the rabbit analogy again. Given a rabbit and its diet, the physical quantity of a substance with the property [rabbit poo] is predictable. That, by definition, under the utilitarian definition used, defines [rabbit poo] as an element of reality, but does that mean the rabbit poo property is also an element of reality? If so, where was the "poo" property before the rabbit ate the clover? If not, does that mean the "poo" property does not define an element of reality? The only reasonable assumptions are:
1) The [rabbit poo] property in fact represents an element of reality.
2) The [rabbit poo] property is not itself a physical element of reality, but a contextual element of reality representing a real physical state.
3) The rabbit poo itself is a physical element of reality.

Taken this way, BI violations might only indicate that ALL measurable properties have the same contextual dependence as every property we are familiar with in the everyday world. It may only be our notion that fundamental properties of "beables" exist that is at fault. Yet a "beable" lacking measurable properties of its own may still gain properties through persistent, or quasi-persistent, interactions with other beables. A Schneider quote I like a lot from "Determinism Refuted", illustrating the unobservability of independent variables, is fitting here. This entails that what we perceive as the physical world is built from verbs, rather than nouns, but doesn't prove that nouns don't exist to define the verbs. So the claim of a proof of the nonexistence of beables goes well beyond any reasonable level of generality that can be claimed.

The issue of completeness is twofold. If -every- possible empirical observation and prediction is contained within a mathematical formalism, is it complete? I would say so, even if reality contains physical constructs at some level, not defined in the formalism, that provide for the outcomes predicted by the formalism. Einstein insisted on these physical constructs being specified in order to qualify as complete. Funny he didn't insist on the same with his own theories, presumably on the grounds that they didn't conflict with certain realist notions. Thus I don't consider, as Einstein did, that every element of physical reality must have a counterpart in physical theory for it to be considered complete. If QM is considered lacking in completeness, gravity is the issue. Yet a model, complete in the Einstein sense, would be a useful construct, and maybe even play a pivotal role in unification.
 
  • #706
my_wan said:
... but does that mean the rabbit poo property is also an element of reality?

If the rabbit poo hits the fan, then I think most would regard 3) as the most plausible alternative. :smile:

Seriously, I’m not quite following all this talk about what is real or not... is a measured photon more real than an unmeasured photon?? Is the measuring apparatus 100% real??

According to Quantum Chromodynamics (QCD) both rabbit poo and measuring apparatus consist of 90% virtual particles, popping in and out all the time:

http://www.physics.adelaide.edu.au/~dleinweb/VisualQCD/QCDvacuum/su3b600s24t36cool30actionHalf.gif

So, what is really real: real real, or counterfactual real, or context real, etc.!? :confused:
 
  • #707
my_wan said:
How did the original EPR paper actually define realism?
http://www.drchinese.com/David/EPR.pdf

...

I do agree with much of what you are saying here. There are definitely utilitarian elements to Bell's approach. But I may interpret this in a slightly different way than you do. In my mind, Bell says, in effect: "Define realism however you like, and I would still expect you to arrive at the same place." I think he took it for granted that the reader might object to any particular definition as somewhat too lenient or alternatively too restrictive, but that one's substitution of a different definition would do little to alter the outcome.

Again, for those following the discussion, I would state it as follows: EPR defined elements of reality as being able to predict the result of an experiment without first disturbing the particle. They believed that there were elements of reality for simultaneous measurement settings a and b. Bell hypothesized that there should, by the EPR definition, also be a simultaneous c. This does not exist as part of the QM formalism, and is generally disavowed in most treatments. So it is a requirement of the realistic school, i.e. the school of thought that says that hidden variables exist, but not an element of QM.
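For those who want to see why a "simultaneous c" is so restrictive, here is a minimal counting sketch (the familiar 0°/120°/240° illustration; the specific angles and the little script are only an example, not something from the posts above): if every pair carries predetermined +1/-1 answers for all three settings, the match rate at randomly chosen different settings can never drop below 1/3, while the entangled-photon prediction cos²(120°) is 1/4.

[CODE]
# Minimal counting sketch (illustration only): three analyzer settings at
# 0, 120 and 240 degrees, and hidden variables that pre-assign a +1/-1
# outcome to every setting, shared by both photons of a pair.

from itertools import product
from math import cos, radians

# Ordered pairs of *different* settings that Alice and Bob could choose.
setting_pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

min_match = 1.0
for triple in product([+1, -1], repeat=3):
    # triple[k] is the pre-assigned answer for setting k.
    match_rate = sum(triple[i] == triple[j] for i, j in setting_pairs) / len(setting_pairs)
    min_match = min(min_match, match_rate)

qm_match = cos(radians(120)) ** 2  # entangled-photon prediction cos^2(a - b)

print(f"lowest possible match rate with pre-assigned answers: {min_match:.3f}")
print(f"QM prediction at 120 deg relative angle:              {qm_match:.3f}")
# Pre-assigned triples can never give less than 1/3; QM predicts 1/4.
[/CODE]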
 
  • #708
my_wan said:
Look at the rabbit analogy again. Given a rabbit and its diet, the physical quantity of a substance with the property [rabbit poo] is predictable. That, by definition, under the utilitarian definition used, defines [rabbit poo] as an element of reality, but does that mean the rabbit poo property is also an element of reality? If so, where was the "poo" property before the rabbit ate the clover? If not, does that mean the "poo" property does not define an element of reality? The only reasonable assumptions are:
1) The [rabbit poo] property in fact represents an element of reality.
2) The [rabbit poo] property is not itself a physical element of reality, but a contextual element of reality representing a real physical state.
3) The rabbit poo itself is a physical element of reality.

The EPR view was that there was an element of reality associated with the ability to predict an outcome with certainty. There was no claim that what was measured was itself "real", as it was understood that it might be a composite or derived quantity. Is temperature real?

But the EPR view was also that the element of reality is non-contextual. They said that any other view was "unreasonable".
 
  • #709
(my_wan, sorry for the silly rabbit joke... parrots & rabbits + EPR seems to short circuit my brain...)


I’m going to stick my layman nose out, for any to flatten.

To my understanding, Einstein didn’t like the idea that nature was uncertain according to QM. That was the main problem – not whether A & B were "real" or not.

Einstein formulated the EPR paradox to show that there was a possibility to get 'complete' information about a QM particle, like momentum and position, by measuring one of the properties on a twin particle, without disturbing the 'original'.

One cornerstone in QM is the Heisenberg uncertainty principle, which says it’s impossible to get 'complete' information about a QM particle (like momentum and position), not because of a lack of proper equipment – but because uncertainty and randomness are a fundamental part of nature.

Einstein raised the bet and placed his own special theory of relativity at stake (probably certain it couldn’t fail), stating that either local hidden variables exist, or spooky action at a distance is required, to explain what happens in the EPR paradox.

Einstein didn’t know that his own argument would boomerang back on him...

And here we are today with a theoretically proven and experimentally supported (99.98%) result, Bell’s theorem, stating that the QM world is non-local.

This means, beyond any doubt, that GR <> QM and to solve this dilemma we need to get GR = QM.

So gentlemen, why all this 'fuss' about reality, counterfactuals, context, C, etc?
 
  • #710
DrChinese said:
The EPR view was that there was an element of reality associated with the ability to predict an outcome with certainty. There was no claim that what was measured was itself "real", as it was understood that it might be a composite or derived quantity. Is temperature real?
Yes, exactly. But the issue is what BI has to assume about the contextuality of the measured values wrt elements of reality. EPR needed only the fact that it was predictable, and no other assumption. I have to object to your next claim.

DrChinese said:
But the EPR view was also that the element of reality is non-contextual. They said that any other view was "unreasonable".
No. EPR did not assume reality is non-contextual. The "unreasonable" quote only denied a singular form of contextuality, i.e., that the reality of measurement P was dependent on measurement Q. That is certainly far from the only form of contextuality that exists, and the interpretation that BI demonstrates this particular form denies any other form of contextuality, and presumes correlation equals causation. It would certainly be "unreasonable" to conclude that classical physics does not allow correlations without defining the measurement itself as the causative agent of the correlation.

Consider what it entails if we assume a realist perspective of BI violations.
1) Correlations at common detector settings are a physical certainty.
2) Offsets from a common detector setting introduce noise, completely random from an experimental/empirical perspective.

Now, via BI violations, counterfactually we can show that the randomness of the noise in 2) cannot show the same randomness wrt another detector setting. Well, big shocker, when arbitrary but common detector settings don't show any significant randomness. If this noise itself is -fundamentally- deterministic but unpredictable, then you can always choose an after-the-fact measurement you could have done that would have given a different value than the expectation value of this randomness. In the same way, any random series of predefined heads/tails can be chosen after the fact to show a non-random correlation with a set of coin tosses.

To illustrate, note how in the negative probability proof the non-correlations (Y = sin^2(45°)) are given the same ontological certainty status as the correlations at common angles. Certainly, from a purely statistical standpoint, the noise of 2) is a certainty in the limit. Yet if you assume a realist position, you can always choose an after-the-fact condition in which noise becomes a signal, or vice versa. I can win the lottery every time if I can choose after the fact.

Of course, a good rebuttal is that the problem in BI violations is that BI violations are always inconsistent with what an alternative measurement would have indicated. The problem here is that the randomness of the noise in 2) is given the same ontological status as the certainty of 1). When you define a counterfactual channel, you are by definition imposing a non-random after-the-fact condition on C. The noise of the counterfactual channel is predefined to be non-random wrt any performable actual experiment, for either leg A or B, since it is after the fact correlated and anti-correlated respectively. This entails that the noise is -predefined- to be inversely related to the randomness of any actual measurement of A and B, thus the randomness of Y = sin^2(45°) is defined out of it after the fact. Like calling heads after the toss. The stochastic noise can't be considered to have the same ontological certainty status as the certainty of the physical correlation itself, which exists even when the noise introduced by offsets shows noncorrelated measurements.

I still think the Born rule is probably directly involved here, which by itself would give realists a headache. :-p I haven't had time to test my rotationally variant vectorial ideas yet either. I'll get to it sooner or later.
 
  • #711
DevilsAvocado said:
This means, beyond any doubt, that GR <> QM and to solve this dilemma we need to get GR = QM.

So gentlemen, why all this 'fuss' about reality, counterfactuals, context, C, etc?

Because precisely what we can presume about reality, counterfactuals, context, etc., plays a large role in what we can consider in getting from GR <> QM to GR = QM. Short of doing that, I don't see the value of purely interpretive models.

And your rabbit poo joke was fine :smile:
 
  • #712
my_wan said:
I still think the Born rule is probably directly involved here, which by itself would give realist a headache.
Why do you think that? Doesn't the Born rule have an empirical basis?
 
  • #713
DrChinese said:
But the EPR view was also that the element of reality is non-contextual. They said that any other view was "unreasonable".
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.

Do you doubt that this is the view of virtually all physicists?

Do you see anything wrong with this view?
 
  • #714
my_wan said:
Let's get inequality violations without correlation in a single PBS:
Let's assume perfect detection efficiency in a single channel: 100% of all particles sent into this channel get detected, going either left or right through a PBS. Consider a set of detections at this PBS at angle 0. 50% go left and 50% right. Now if you ask which of those that went left would have gone right, and vice versa, had these photons been measured at an angle setting of 22.5, it's reasonable to say ~15% that would have gone left go right, and vice versa.
This is only true if the source produces only H and V photons.
You can easily check it with such a setup. Let's say a single run of the experiment lasts 10 seconds. Our photon source produces H-polarized photons for the first 5 seconds of the experiment and V-polarized photons for the other 5 seconds.
Say for the first 5 seconds all photons appear in PBS channel #1 and for the other 5 seconds all photons appear in channel #2. When we rotate the PBS by 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is indeed reasonable to assume that 15% of photons changed their channel.

However, if the source produces +45° and -45° polarized photons we will have a different picture.
For PBS at 0° we have:
50% photons in channel #1 and 50% photons in #2 for first half and
50% photons in channel #1 and 50% photons in #2 for second half.
For PBS at 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is reasonable to assume that 35% of photons changed their channel.
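The percentages above follow directly from the cos² rule; here is a minimal sketch of the arithmetic (assuming Malus-law channel probabilities, which is what both the H/V and the ±45° examples rely on):

[CODE]
# Sketch of the channel-fraction arithmetic quoted above, assuming the
# Malus-law cos^2 rule for the probability of exiting PBS channel #1.

from math import cos, radians

def channel1_fraction(photon_angle_deg, pbs_angle_deg):
    """Fraction of photons polarized at photon_angle that exit channel #1."""
    return cos(radians(photon_angle_deg - pbs_angle_deg)) ** 2

for name, angles in [("H/V source", [0, 90]), ("+45/-45 source", [45, -45])]:
    print(name)
    for pbs in [0.0, 22.5]:
        fractions = ", ".join(f"{channel1_fraction(a, pbs):.2f}" for a in angles)
        print(f"  PBS at {pbs:4.1f} deg: channel #1 fractions = {fractions}")

# The H/V source at 22.5 deg gives ~0.85 / 0.15, while the +45/-45 source
# gives 0.50 / 0.50 at 0 deg and ~0.85 / 0.15 at 22.5 deg, matching the
# figures in the post above.
[/CODE]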

my_wan said:
Yet this same assumption indicates that at an angle of 45, ~50% that would have gone left go right (relative to the 0 angle), and vice versa. Yet, relative to angle 22.5, the 45 angle can only have switched ~15% of the photon detection rates. 15% + 15% = 30%, not 50%.
The explanation above indicates that you don't get the problem you are stating here.
 
  • #715
DrChinese said:
GHZ tests are not considered to rely on the Fair Sampling assumption.
In the original GHZ paper "Bell's theorem without inequalities" (it is pay-per-view, unfortunately) it is said:
"The second step is to show the test could be done even with low-efficiency detectors, provided that we make a plausible auxiliary assumption, which we call fair sampling. Finally, we show that the auxiliary assumption is dispensable if detector efficiencies exceed 90.8%."

DrChinese said:
Now there is a kicker on this that may confuse folks. It is true that only a sample is used, so you might think the Fair Sampling issue is present. But it is not. The sample looks like this:

-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1...

Where local realism predicts

1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1...

A little consideration will tell you that local realism is falsified in this case. Every case, individually, is a falsification.
GHZ experiments use four photons, not one photon.
If we talk about three-photon GHZ then its results are acquired using four different modifications of the setup. And the GHZ inequalities are calculated from all four results together, each of which consists of three-fold coincidences in four detectors.
Nothing of this indicates that you can simplify the experimental outcome the way you did.
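For readers trying to follow this exchange, here is one way to see the per-event contradiction being described, as a minimal sketch (the sign convention follows the common textbook form where XYY = YXY = YYX = +1 and XXX = -1 for the GHZ state; as noted above, real experiments involve considerably more than this idealization):

[CODE]
# Brute-force check of the idealized three-photon GHZ logic (illustration
# only): can pre-assigned local values reproduce all four QM predictions?

from itertools import product

consistent = 0
for x1, y1, x2, y2, x3, y3 in product([+1, -1], repeat=6):
    # Pre-assigned X and Y outcomes for each of the three photons.
    if (x1 * y2 * y3 == +1 and
            y1 * x2 * y3 == +1 and
            y1 * y2 * x3 == +1 and
            x1 * x2 * x3 == -1):
        consistent += 1

print(f"assignments matching all four GHZ predictions: {consistent}")
# Prints 0: the first three constraints force x1*x2*x3 = +1, so a single
# XXX event with product -1 already contradicts the pre-assigned values.
[/CODE]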
 
  • #716
zonde said:
my_wan said:
Let's get inequality violations without correlation in a single PBS:
Let's assume perfect detection efficiency in a single channel: 100% of all particles sent into this channel get detected, going either left or right through a PBS. Consider a set of detections at this PBS at angle 0. 50% go left and 50% right. Now if you ask which of those that went left would have gone right, and vice versa, had these photons been measured at an angle setting of 22.5, it's reasonable to say ~15% that would have gone left go right, and vice versa.

This is only true if the source produces only H and V photons.
Absolutely not. The statistics, as stated, are in fact predicated on completely randomized polarizations coming from the source. However, if the photons coming from the source were 50% H and V, strictly at those 2 polarizations, it would have the same statistical effect, because the rate at which photons switch paths from H is the same rate at which they would switch from V in reverse.

But the fact remains, purely random polarization would have the same statistics. I went to great lengths to verify this assumption.

zonde said:
You can easily check it with such a setup. Let's say a single run of the experiment lasts 10 seconds. Our photon source produces H-polarized photons for the first 5 seconds of the experiment and V-polarized photons for the other 5 seconds.
Say for the first 5 seconds all photons appear in PBS channel #1 and for the other 5 seconds all photons appear in channel #2. When we rotate the PBS by 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is indeed reasonable to assume that 15% of photons changed their channel.
Yep, but this is quite different from random polarizations, where any setting of the PBS sends 50% in each direction, but also incidentally matches, at all PBS settings, the statistics of two pure polarizations at 90-degree offsets.

Consider this: Add the first and second set of 5-second runs together and the 85% and 15% wash out, just like what you initially specified above. Now check and see that the same thing happens at all settings. Thus, if it always washes out at all settings in the strict H and V cases, why would a completely randomized source, which only changes those same settings via the photons rather than the polarizer settings, lead to anything different in the overall statistics?

zonde said:
However, if the source produces +45° and -45° polarized photons we will have a different picture.
For PBS at 0° we have:
50% photons in channel #1 and 50% photons in #2 for first half and
50% photons in channel #1 and 50% photons in #2 for second half.
For PBS at 22.5° we have:
85% photons in channel #1 and 15% photons in #2 for first half and
15% photons in channel #1 and 85% photons in #2 for second half.
So it is reasonable to assume that 35% of photons changed their channel.
Yep, but only if the photons from the source are not randomized, which you falsely assumed my description didn't do, simply because the overall statistics happen to match for both the pure H and V and the randomized photon polarization cases.

zonde said:
my_wan said:
Yet this same assumption indicates that at an angle of 45, ~50% that would have gone left go right (relative to the 0 angle), and vice versa. Yet, relative to angle 22.5, the 45 angle can only have switched ~15% of the photon detection rates. 15% + 15% = 30%, not 50%.
The explanation above indicates that you don't get the problem you are stating here.
The only mistake I see in your reasoning is thinking that because there is a statistical match between the pure H and V case and the randomized case, I must have been referring only to the pure H and V case. This is wrong. Check the same statistics for the randomized case and you'll see a statistical match for both cases, but the randomized case would invalidate your +45° and -45° case, because I was assuming the randomized case.
 
  • #717
DrChinese said:
Hey, I hope you know I am glad you are here.
Was there something in my prior post in this thread that indicated I think you're not glad that I'm here? (Please don't misunderstand the 'tone' of any of my posts. A day without you at PF would be like a day without ... sunshine. However, while I do like the fact that the sun is shining, it doesn't contradict the fact of shade. This is just elementary optics which both you and Bell seem to be avoiding in your interpretations of Bell's theorem.)

I quote you from a previous post:
DrChinese said:
You shouldn't be able to have this level of correlation if locality and realism apply.
This betrays an apparent lack of understanding of elementary optics. Which, by the way, also applies in qm.

DrChinese said:
I hope nothing I say discourages you in any way. In fact, I encourage you to challenge from every angle. I enjoy a lot of your ideas and they keep me on my toes.
Then, when I, or someone else, offers a purported LR model of entanglement that reproduces the qm predictions, why not look at it closely and express exactly why you think it is or isn't an LR model of entanglement?

DrChinese said:
I think you know that there are a lot of readers who are not active posters in many of our discussions. Just look at the view count on these threads. While I know what is what throughout the thread, these readers may not. That is why I frequently add comments to the effect of "not generally accepted", "show peer reviewed reference" , etc. my_wan and billschnieder get that too. So my objective is to keep casual readers informed so that they can learn both the "standard" (generally accepted) and the "non-standard" (minority) views. I would encourage any reader to listen and learn to a broad spectrum of ideas, but obviously the mainstream should be where we start. And that is what PhysicsForums follows as policy as well.
My approach to understanding Bell's theorem isn't a 'nonstandard' or 'minority' approach. To characterize it as such does a disservice to me and misinforms less sophisticated posters. What you are stating, sometimes, as the mainstream view is, I think, incorrect, and also not the mainstream view.

There's a very important difference between:
1. No physical theory of local Hidden Variables can ever reproduce all of the predictions of Quantum Mechanics.
and:
2. No Local Realistic physical theory can ever reproduce all of the predictions of Quantum Mechanics.

We KNOW that 2. is incorrect, because viable LR models of entanglement exist, and they remain unrefuted. If you refuse to acknowledge them, then so what. They exist nonetheless.

I want readers of this thread to understand this. There are LR theories of entanglement which reproduce all of the predictions of qm. They're in the preprint archives at arxiv.org, and there are some that have even been published in peer reviewed journals. Period. If you, DrChinese, want to dispute this, then it's incumbent on you, or anyone who disputes these claims, to analyze the theories in question and refute their claims regarding locality or realism or compatibility with qm. If this isn't done, then the claims stand unrefuted. And, since no such refutations exist, then the current status of LR theories which reproduce all qm predictions is that they remain unrefuted.

If you don't want to inform casual readers of this thread of this fact, then fine. I've informed them.

And just so there's no confusion about this, let me say it again. Bell's theorem does not rule out local realistic theories of entanglement. If DrChinese disagrees with this, then I want you, the casual reader of this thread, to demand that DrChinese analyze a purported LR theory and show that it either isn't local or realistic or both or that it doesn't reproduce qm predictions.

DrChinese said:
On the other hand, when posters suitably label items then that is not an issue and I don't feel compelled to add my (sometimes snippy) comments. Also, many times a personal opinion can be converted to a question so as not to express an opinion that can be misconstrued. For example: "Is it possible that Bell might not have considered the possibility of X?". That statement - er question - does not attempt to contradict Bell per se. And then the discussion can continue.
And what you often don't do in many of your statements is to qualify exactly what you're saying. So, bottom line, your statements often perpetuate the myth that Bell's theorem informs us about facts of nature -- rather than facts of what sorts of theoretical forms are compatible with certain experimental situations.

DrChinese said:
And less feelings get hurt. And people won't think I am resorting to authority as a substitute for a more convincing argument. As I often say, it only takes one. Of course, me being me, that line is stolen (in mangled form) from a man who is quite well known. In fact, maybe it is time to add something new to my tag line...
There are, at least, a dozen different LR models of entanglement in the literature which reproduce the qm predictions. Of course, if you won't look at any of them then 10^1000 wouldn't be enough. Would it?

All you have to do is look at one. If you think it doesn't qualify as a local or a realistic model, then you can point out why (but don't require that it produce incorrect predictions, because that's just silly). If you're unwilling to do that, then your Einstein quote is just fluffy fuzziness wrt your position on LR models of entanglement.

I want you to refute an LR theory of entanglement that I present. You've been called out. Will you accept the challenge?

By the way, I like the Korzybski quote.

www.DrChinese.com "The map is not the territory." - Korzybski.

"Why 100? If I were wrong, one would have been enough." - Albert Einstein, when told of publication of the book One Hundred Authors Against Einstein.
 
  • #718
ThomasT has a point wrt the mainstream view on the realism issue. I know very few who take as hard a view on realism as DrC; rather, an acceptance of the uncertainty in any particular interpretation. Of course my personal experience is limited. However, a review of published opinions is not necessarily indicative of the general opinion. Like the myth that violence is increasing, when in fact it's been steadily dropping year to year for many generations. I would be curious what the actual numbers look like.

So even though BI might specify the status quo of the argument, it's likely much more suspect to claim the standard interpretation represents the predominant view.

DrChinese said:
The sample looks like this:

-1
-1
-1
-1...

Where local realism predicts

1
1
1
1...

A little consideration will tell you that local realism is falsified in this case.
Only with a very restricted notion of realism and what it entails can this be said. I also never got a response to my objection to calling realistic ways of contextualizing such variables a Fair Sampling argument.

I would love to hear a definition of contextual variables. Certain statements made it sound like contextual variables, by definition, meant non-realistic. I never got a response to the question: is velocity a contextual variable?

I also never got an objection when I pointed out that straightforward squaring of any vector leads to values that are unavoidably coordinate dependent, that is, it produces different answers and not just the same answer expressed in a different coordinate system. Yet the requirement that a realistic model must model arbitrary detector settings, rather than arbitrary offsets, requires a coordinate-independent square of a vector.

To say realism is falsified most certainly is an overreach of what can be ascertained from the facts. I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.
 
  • #719
I must inform the casual reader: Don’t believe everything you read at PF, especially if the poster defines you as "less sophisticated".

Everything is very simple: If you have one peer-reviewed theory (without references or link) stating that 2 + 2 = 5 and a generally accepted and mathematically proven theorem stating 2 + 2 = 4, then one of them must be false.

And remember: Bell’s theorem has absolutely nothing to do with "elementary optics" or any other "optics", I repeat – absolutely nothing. Period.
 
  • #720
my_wan said:
...
Hmm, you think that I am questioning 50%/50% statistics?
I don't do that. I am questioning your statement that "it's reasonable to say ~15% that would have gone left go right, and vice versa."
That is not reasonable, or alternatively it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
If you have a source that consists of an even mixture of photons of all polarizations, then the reasonable assumption is that ~25% changed their channel.
 
  • #721
ThomasT said:
...2. No Local Realistic physical theory can ever reproduce all of the predictions of Quantum Mechanics.

We KNOW that 2. is incorrect, because viable LR models of entanglement exist, and they remain unrefuted. If you refuse to acknowledge them, then so what. They exist nonetheless.

I want readers of this thread to understand this. There are LR theories of entanglement which reproduce all of the predictions of qm. They're in the preprint archives at arxiv.org, and there are some that have even been published in peer reviewed journals. Period. If you, DrChinese, want to dispute this, then it's incumbent on you, or anyone who disputes these claims, to analyze the theories in question and refute their claims regarding locality or realism or compatibility with qm.

If you don't want to inform casual readers of this thread of this fact, then fine. I've informed them.

And just so there's no confusion about this, let me say it again. Bell's theorem does not rule out local realistic theories of entanglement. If DrChinese disagrees with this, then I want you, the casual reader of this thread, to demand that DrChinese analyze a purported LR theory and show that it either isn't local or realistic or both or that it doesn't reproduce qm predictions.
...

There are, at least, a dozen different LR models of entanglement in the literature which reproduce the qm predictions. Of course, if you won't look at any of them then 10^1000 wouldn't be enough. Would it?

All you have to do is look at one. If you think it doesn't qualify as a local or a realistic model, then you can point out why (but don't require that it produce incorrect predictions, because that's just silly). If you're unwilling to do that, then your Einstein quote is just fluffy fuzziness wrt your position on LR models of entanglement.

I want you to refute an LR theory of entanglement that I present. You've been called out. Will you accept the challenge?

I have a requirement that is the same requirement as any other scientist: provide a local realistic theory that can provide data values for 3 simultaneous settings (i.e. fulfilling the realism requirement). The only model that does this that I am aware of is the simulation model of De Raedt et al. There are no others to consider. There are, as you say, a number of other *CLAIMED* models yet none of these fulfill the realism requirement. Therefore, I will not look at them.

Perhaps you will show me where any of the top scientific teams have written something to the effect of "local realism is tenable after Bell". Because all of the teams I know about state the diametric opposite. Here is Zeilinger (1999) in a typical quote of his perspective:

"Second, a most important development was due to John Bell (1964) who continued the EPR line of reasoning and demonstrated that a contradiction arises between the EPR assumptions and quantum physics. The most essential assumptions are realism and locality. This contradiction is called Bell’s theorem."

I would hope you would recognize the above as nearly identical to my line of reasoning. So if you know of any hypothesis that contradicts the above AND yields a local realistic dataset, please give a link and I will give you my thoughts. But I cannot critique that which does not exist. (Again, an exception for the De Raedt model, which has a different set of issues entirely.)
 
  • #722
my_wan said:
Because precisely what we can presume about reality, counterfactuals, context, etc., plays a large role in what we can consider to get GR <> QM to GR = QM.


Maybe you’re right. Personally, I think semantic discussions on "reality" could keep you occupied for a thousand years, without substantial progress. What if Einstein presented something like this:

"The causal reality for the joint probabilities of E having a relation to M, in respect of the ideal context, is strongly correlated to C."

Except for the very fine "sophistication" – could this be of any real use?

Maybe I’m wrong, and Einstein indeed used this very method to get to:

E = mc²

... I don’t know ...

But wrt "reality", I think we have a very real problem, in that the discordance for aligned parallels is 0:

N(0°, 0°) = 0​

If we then turn one minus thirty degrees and the other plus thirty degrees, from a classical point of view we should get:

N(+30°, -30°) ≤ N(+30°, 0°) + N(0°, -30°)​

Meaning that the discordance when both are turned cannot be greater than the sum of the two turned separately, which is very logical and natural.

But this is NOT true according to quantum mechanical predictions and experiments!
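To put numbers on it (a minimal sketch, assuming the usual entangled-photon prediction that the mismatch rate at relative angle θ goes as sin²θ):

[CODE]
# Numerical check of the 30-degree example, assuming the standard
# entangled-photon prediction: mismatch rate N(a, b) ~ sin^2(a - b).

from math import sin, radians

def mismatch(a_deg, b_deg):
    return sin(radians(a_deg - b_deg)) ** 2

lhs = mismatch(+30, -30)                   # both polarisers turned
rhs = mismatch(+30, 0) + mismatch(0, -30)  # each turned separately

print(f"N(+30, -30)           = {lhs:.3f}")
print(f"N(+30, 0) + N(0, -30) = {rhs:.3f}")
print("N(+30,-30) <= N(+30,0) + N(0,-30) holds:", lhs <= rhs)
# QM gives 0.75 on the left and only 0.5 on the right, so the classical
# inequality fails, which is the point being made above.
[/CODE]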

Even a high school freshman can understand this problem; you don’t have to be "sophisticated" or "intellectually superior", that’s just BS.

Now, to start long die-hard discussions on "elementary optics" to get an illusion of a probable solution is not very bright, not even "sophisticated".

I think most here realize that attacking the mathematics as such cannot be considered "healthy".

To discuss what’s real or not maybe could lead to "something", but it will never change the mathematical reality.

Therefore, the only plausible way 'forward' is to find a 'flaw' in QM, which will be very, very hard since QM is the most precise scientific theory we’ve got. That potential 'flaw' in QM has to be mathematical, not semantic – words won’t change anything about the EPR paradox and the mathematical predictions of QM.

I think your attempts to get a 'classical' explanation for what happens in EPR-Bell experiments are very interesting, but how is this ever going to change the real mathematical truth, which we both know is true?
 
  • #723
my_wan said:
1. ThomasT has a point wrt the mainstream view on the realism issue. I know very few who take as hard a view on realism as DrC.

2. Only with a very restricted notion of realism and what it entails can this be said. I also never got a response to my objection to calling realistic ways of contextualizing such variables a Fair Sampling argument.

3. I would love to hear a definition of contextual variables. Certain statements made it sound like contextual variables, by definition, meant non-realistic. I never got a response to the question: is velocity a contextual variable?

4. I also never got an objection when I pointed out that straightforward squaring of any vector leads to values that are unavoidably coordinate dependent, that is, it produces different answers and not just the same answer expressed in a different coordinate system. Yet the requirement that a realistic model must model arbitrary detector settings, rather than arbitrary offsets, requires a coordinate-independent square of a vector.

5. To say realism is falsified most certainly is an overreach of what can be ascertained from the facts. I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.

In trying to be complete in my response so you won't think I'm avoiding anything:

1. ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.

2. GHZ is very specific. It is a complex argument, but uses the very same definition of reality as does Bell. And this yields a DIFFERENT prediction in every case from QM, not just in a statistical ensemble. So NO, your conclusion is incorrect.

3. A contextual variable is one in which the nature of the observation is part of the equation for predicting the results. Thus it does not respect observer independence. You will see that in your single particle polarizer example, observer dependence appears to be a factor in explaining the results. Keep in mind, contextuality is not an assumption of Bell.

4. Your argument here does not follow regarding vectors. So what if it is or is not true? This has nothing to do with a proof of QM over local realism. I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out. OK, then local realism is ruled out which is what I am asserting anyway. But that result is not generally accepted as true and so I just don't follow. How am I supposed to make your argument for you?

5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.
 
  • #724
my_wan said:
I don't care who is right, I want a clearer picture of the mechanism, locally realistic or not.
I’m with you on this one 1000%. This is what we should discuss, not "elementary optics".

I think that it’s overlooked in this thread that this was a major problem for John Bell as well (and I’m going to prove this statement in a few days).

Bell knew that his theorem creates a strong contradiction between QM & SR: one or both must be more or less wrong. Then if QM is more or less wrong, it could mean that Bell’s theorem is also more or less wrong, since it builds its argument on QM predictions.

DrChinese said:
5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.

Don’t you think that interpretations are a little too easy a way out of this? I don’t think John Bell would have agreed with you here...
 
  • #725
DevilsAvocado said:
Bell knew that his theorem creates a strong contradiction between QM & SR: one or both must be more or less wrong. Then if QM is more or less wrong, it could mean that Bell’s theorem is also more or less wrong, since it builds its argument on QM predictions.

Don’t you think that interpretations are a little too easy a way out of this? I don’t think John Bell would have agreed with you here...

Bell shifted a bit on interpretations. I think the majority view is that he supported a Bohmian perspective, but I am not sure he came down fully in favor of any one interpretation. At any rate, I really don't know what we can say about underlying physical mechanisms. We just don't know how nature manages to implement what we call the formalism.

And don't forget that Bell does not require QM to be correct, just that the QM predictions are incompatible with LR predictions. Of course, Bell tests confirm QM to many SD.
 
  • #726
DrChinese said:
... QM predictions are incompatible with LR predictions.

Yes you are right, and this is what causes the dilemma. The Einsteinian argument fails:

no action at a distance (polarisers parallel) ⇒ determinism

determinism (polarisers nonparallel) ⇒ action at a distance

Meaning QM <> SR.
 
  • #727
zonde said:
Hmm, you think that I am questioning 50%/50% statistics?
I don't do that.
No. I understood what you asserted.

zonde said:
I am questioning your statement that "it's reasonable to say ~15% that would have gone left go right, and vice versa."
Yes, I saw that. The pure case is in fact what I used to empirically verify the assumption.

zonde said:
That is not reasonable, or alternatively it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
And this is where you go wrong again. I stand by my factual statement (not assumption) that randomized photon polarizations will have the same route switching statistics as an even mixture of pure H and V polarizations. I verified it both mathematically and in computer simulations.

Consider, in the pure case where you got it right, what happens when you move a detector setting from 0 to 22.5 degrees. The route-switching statistics look like cos^2(22.5) = sin^2(67.5), thus you are correct about the pure polarization pairs at 90-degree offsets. Now notice that cos^2(theta) = sin^2(theta ± 90) for ANY arbitrary theta. Now add a second pair of pure H and V photon polarizations that is offset 45 degrees from the first pair. At a 0-angle detector setting you've added 50% more photons to be detected from the new H and 50% from the new V polarization beams. Since cos^2(theta) = sin^2(theta ± 90) in ALL cases, the overall statistics have not changed. To add more pure beam pairs without changing the overall statistics, you have to add 2 pairs of pure H and V beams at both 22.5- and 67.5-degree offsets. To add more pure beam sets, without changing the overall statistics, requires 4 more H and V pure beams offset equidistant from those 4. The next step requires 8 to maintain the same statistics; simply take the limit. You then end up with a completely randomized set of photon polarizations that exhibits the exact same path-switching statistics as the pure H and V case, because cos^2(theta) = sin^2(theta ± 90) for absolutely ALL values of theta.

So if you still don't believe it, show me. If you want a computer program that uses a random number generator to generate randomly polarized photons and send them to a virtual detector, ask. I can write the program pretty quickly. You'll need AutoIt (freeware, not nagware) if you don't want to be sent an exe. With AutoIt installed, you can run the script directly without compiling it.
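In the meantime, here is a minimal sketch of what such a simulation could look like (written in Python rather than AutoIt, and only an illustration, not the offered script; it assumes the Malus-law cos² rule for the channel probability):

[CODE]
# Sketch of a single-PBS simulation: randomly polarized photons versus an
# even H/V mixture, with the Malus-law cos^2 rule for channel #1.
# Illustration only.

import random
from math import cos, radians

def channel1_fraction(source, pbs_angle_deg, n=200_000):
    """Fraction of photons from `source` that exit channel #1."""
    hits = 0
    for _ in range(n):
        photon_angle = source()
        if random.random() < cos(radians(photon_angle - pbs_angle_deg)) ** 2:
            hits += 1
    return hits / n

uniform_source = lambda: random.uniform(0.0, 180.0)  # fully randomized
hv_source = lambda: random.choice([0.0, 90.0])       # even H/V mixture

for pbs in [0.0, 22.5, 45.0]:
    u = channel1_fraction(uniform_source, pbs)
    hv = channel1_fraction(hv_source, pbs)
    print(f"PBS at {pbs:4.1f} deg: channel #1 fraction "
          f"(random source) = {u:.3f}, (H/V source) = {hv:.3f}")

# Both sources give ~0.50 in channel #1 at every PBS setting, so the
# single-channel counts alone cannot distinguish them.  How many photons
# "would have" switched channels under a different setting is exactly the
# disputed counterfactual, and depends on the hidden-variable rule assumed.
[/CODE]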

zonde said:
If you have a source that consists of an even mixture of photons of all polarizations, then the reasonable assumption is that ~25% changed their channel.
False, and that is not an assumption, it is a demonstrable fact. So long as the pure polarization case exhibits those statistics, physically so must the completely randomized case. This fact is central to EPR modeling attempts. If you can demonstrate otherwise, I'll add a sig line to my profile stating that and linking to where you made a fool of me.
 
  • #728
zonde said:
Hmm, you think that I am questioning 50%/50% statistics?
I don't do that. I am questioning your statement that "it's reasonable to say ~15% that would have gone left go right, and vice versa."
That is not reasonable, or alternatively it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
If you have a source that consists of an even mixture of photons of all polarizations, then the reasonable assumption is that ~25% changed their channel.
I also just noticed that you contradicted yourself. You say:
1) ...it is reasonable only if you assume that you have a source with an even mixture of H and V photons.
2) If you have a source that consists of an even mixture of photons of all polarizations, then the reasonable assumption is that ~25% changed their channel.

But a random distribution is an "even mixture of H and V" as defined by 1), just not all on the same 2 axes. For a random distribution, there statistically exists both an opposite and a perpendicular case for every possible polarization instance.
 
  • #729
DrChinese said:
In trying to be complete in my response so you won't think I'm avoiding anything:

1. ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.
Don't have much to refute this with. I've read the arguments and counterarguments; I was more curious about the general opinion among physicists, whether they have published positions on EPR or not.

DrChinese said:
2. GHZ is very specific. It is a complex argument, but uses the very same definition of reality as does Bell. And this yields a DIFFERENT prediction in every case from QM, not just in a statistical ensemble. So NO, your conclusion is incorrect.
The question was the reasoning behind labeling any specific form of contextualization of contextual variables a Fair Sampling argument. I'm not even sure what this response has to do with the issue as stated. Though I have previously expressed confusion about how you define precisely what does or doesn't qualify as realism, even with that definition. Merely restating the definition doesn't help much. Nor does it indicate whether realistic models can exist that don't respect that definition.

DrChinese said:
3. A contextual variable is one in which the nature of the observation is part of the equation for predicting the results. Thus it does not respect observer independence. You will see that in your single particle polarizer example, observer dependence appears to be a factor in explaining the results. Keep in mind, contextuality is not an assumption of Bell.
Nice definition, I'll keep that for future reference. I'm well aware that my single polarizer example contains contextual dependencies, yet has empirically valid consequences. It was the fact that the contextual values didn't depend on any correlations to anything that was important to the argument. Thus it was limited to refuting a non-locality claim, not a realism claim. What it indicates is that a classical mechanism for the nonlinear path switching of uncorrelated photon responses to a single polarizer is required to fully justify a realistic model. I even give the opinion that a mechanistic explanation of the Born rule might be required to pull this off. Some would be happy to just accept the empirical mechanism itself as a local classical optics effect and go from there. I'm not. I'm aware contextuality was not an assumption of Bell. Hence the requirement of some form of classical contextuality to escape the stated consequences of his inequality.

DrChinese said:
4. Your argument here does not follow regarding vectors. So what if it is or is not true? This has nothing to do with a proof of QM over local realism. I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out. OK, then local realism is ruled out which is what I am asserting anyway. But that result is not generally accepted as true and so I just don't follow. How am I supposed to make your argument for you?
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool ball collision under 2 different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, it leads to an entirely different answer altogether. For this reason such vector operations are generally avoided, using scalar multiplication instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of vectors, there exists no way of -uniquely- recovering the particular vectors that went into defining it.

DrChinese said:
5. I know of no one who can provide ANY decent picture, and I certainly can't help. There are multiple interpretations, take your pick.
That's more than a little difficult when you seem to falsely represent any particular contextualization of variables as a Fair Sampling argument. Refer back to 2. where your response was unrelated to my objection to labeling contextualization arguments as a Fair Sampling argument.
 
  • #730
my_wan said:
Consider, in the pure case where you got it right, where you move a detector setting from 0 to 22.5 degrees. The route switching statistics look like cos^2(22.5) = sin^2(67.5), thus you are correct about the pure polarization pairs at 90 degree offsets.
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|

my_wan said:
Now notice that cos^2(theta) = sin^2(theta +- 90) for ANY arbitrary theta. Now add a second pair of pure H and V photons polarizations that is offset 45 degrees from the first pair. Now at a 0 angle detector setting you've added 50% more photons to be detected from the new H and 50% from the new V polarization beams. Since cos^2(theta) = sin^2(theta +- 90) in ALL cases the overall statistics have not changed.
The same way as above
|cos^2(67.5)-cos^2(45)| and it is not equal to |cos^2(22.5)-cos^2(0)|
and
|sin^2(67.5)-sin^2(45)| and it is not equal to |sin^2(22.5)-sin^2(0)|

so if you add H and V photons that are offset by 45 degrees you change your statistics.

my_wan said:
So if you still don't believe it, show me. If you want a computer program that uses a random number generator, to generate randomly polarized photons and send them to a virtual detector, ask. I can write the program pretty quick. You'll need AutoIt (freeware, not nagware) if you don't want to be sent an exe. With AutotIt installed, you can run the script directly without compiling it.
I would stick to a simple example:
Code:
      polarizer at 0    polarizer at 22.5
p=0   cos^2(0-0)  =1    cos^2(0-22.5)  =0.85  difference=0.15
p=45  cos^2(45-0) =0.5  cos^2(45-22.5) =0.85  difference=0.35
p=90  cos^2(90-0) =0    cos^2(90-22.5) =0.15  difference=0.15
p=135 cos^2(135-0)=0.5  cos^2(135-22.5)=0.15  difference=0.35
average difference=0.25

my_wan said:
I also just notice you contradicted yourself. You say:
1) ...it is reasonable only if you assume that you have source with even mixture of H and V photons.
2) If you have source that consists of even mixture of photons with any polarization then reasonable assumption is that ~25% changed their channel.

But a random distribution is an "even mixture of H and V" as defined by 1), just not all on the same 2 axis. For a random distribution, there statistically exist both an opposite and perpendicular case for every possible polarization instance.
The qualification "with any polarization" in 2), as opposed to an even mixture of only H and V in 1), makes the difference between them.
 
  • #731
zonde said:
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|

This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any 1 measurement, not 2 polarizer settings in any 1 measurement. Instead you have one measurement at one location, and what you are comparing is the photon statistics that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.

Consider: you have 1 polarizer at 1 setting (0 degrees) with 1 uncorrelated beam pointed at it, such that 50% of the light goes through. You change the setting to 22.5 degrees. Now ~15% of the V photons switch from going through to not going through the detector, sin^2(22.5). At the SAME 22.5 degree setting, you get cos^2(67.5) = ~15% more detections from the H photons. 15% lost from V and 15% gained from H. This is even more general in that sin^2(theta) = |cos^2(90-theta)| for all theta. This is NOT a counterfactual measure. This is what you get from the one measurement you are making at the one setting. So you can't use the cos from a previous measurement you are not currently making. Else it amounts to subtracting a cos from a cos that's not even part of the polarizer setting at that time, which breaks its consistency with BI violation statistics for other possible settings.

ONLY include the statistics of whatever measurement you are performing at THAT time, and you get statistical consistency between BI violations and photon route switching without correlations, with purely randomized photon polarizations. The key is: DON'T mix the math for both settings in one measurement. This is key to subverting the counterfactuals in BI and still getting the same statistics. Only count what photons you can empirically expect to switch routes upon switching to that ONE setting, by counting H adds and V subtracts at that ONE setting.

Then, by noting that it's applicable at all thetas, it remains perfectly valid for fully randomized photon polarizations at ANY arbitrary setting, provided you are allowed to arbitrarily relabel the 0 point of the non-physical coordinate labels.
 
  • #732
Besides, you can't change my formula, then claim my formula doesn't do what I claimed because the formula you swapped in doesn't. :wink:
 
  • #733
my_wan said:
This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any 1 measurement, not 2 polarizer settings in any 1 measurement. Instead you have one measurement at one location, and what you are comparing is the photon statistics that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.
Yes, that is only speculation. Nothing straightforwardly testable.

my_wan said:
Consider, you have 1 polarizer at 1 setting (0 degrees) with 1 uncorrelated beam pointed at it, such that 50% of the light goes through. You change settings to 22.5 degrees. Now 15% of the V photons switch from going through to not going through the detector, sin^2(22.5). Now at the SAME 22.5 degree setting, you get cos^2(67.5) = 15% more detections from the H photons. 15% lost from V and 15% gained from H.
Or lost 35% and gained 35%. Or lost x% and gained x%.
The question is not about lost photon count = gained photon count.
The question is about this number: 15%.
If you keep insisting that it's 15% because it's 15% both ways, then we can stop our discussion right there.

my_wan said:
This is even more general in that sin^2(theta) = |cos^2(90-theta)| for all theta.
sin(theta) = cos(90-theta) is a trivial trigonometric identity. What do you expect to prove with that?

my_wan said:
This is NOT a counterfactual measure. This is what you get from the one measure you are getting at the one setting. So you can't use cos from the previous measurement you are not currently measuring. Else it amounts to subtracting cos from a cos that's not even part of the polarizer setting at that time, which breaks it's consistency with BI violations statistics for other possible settings.

ONLY include the statistics of whatever measurement you are performing at THAT time, and you get statistical consistency between BI violations and photon route switching without correlations, with purely randomized photon polarizations. The key is DON'T mix the math for both settings for one measurement. This is key to subverting the couterfactuals in BI and still getting the same statistics. Only count what photons you can empirically expect to switch routes upon switching to that ONE setting by counting H adds and V subtracts at that ONE setting.
Switch routes to ... FROM what?
You have no switching with ONE setting. You have to have switching FROM ... TO ... otherwise there is no switching.
 
  • #734
my_wan said:
That's more than a little difficult when you seem to falsely represent any particular contextualization of variables as a Fair Sampling argument. Refer back to 2. where your response was unrelated to my objection to labeling contextualization arguments as a Fair Sampling argument.

To me, the (Un)Fair Sampling argument is as follows: "The full universe does not respect Bell's Inequality (or similar), while a sample does. The reason an attribute of the sample is different from that of the universe is that certain data elements are more likely to be detected than others, causing a skewing of the results."

I reject this argument as untenable; however, I would say my position is not generally accepted. A more generally accepted argument is that the GHZ argument renders the Fair Sampling assumption moot.

Now, I am not sure how this crept into our discussion except that as I recall, you indicated that this had some relevance to Bell. I think it is more relevant to tests of Bell's Inequality, which we aren't really discussing. So if there is nothing further to this line, we can drop it.
 
  • #735
my_wan said:
1) You say: "Your argument here does not follow regarding vectors. So what if it is or is not true?", but the claim about this aspect of vectors is factually true. Read this carefully:
http://www.vias.org/physics/bk1_09_05.html
Note: Multiplying vectors from a pool ball collision under 2 different coordinate systems doesn't just lead to the same answer expressed in a different coordinate system, it leads to an entirely different answer altogether. For this reason such vector operations are generally avoided, using scalar multiplication instead. Yet the Born rule and cos^2(theta) do just that.
2) You say: "I think you are trying to say that if vectors don't commute, then by definition local realism is ruled out.", but they don't commute for pool balls either, when used this way. That doesn't make pool balls not real. Thus the formalism has issues in this respect, not the reality of the pool balls. I even explained why: because given only the product of vectors, there exists no way of -uniquely- recovering the particular vectors that went into defining it.

Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
 
  • #736
DrChinese said:
A more generally accepted argument is that the GHZ argument renders the Fair Sampling assumption moot.
Can you produce some reference?

I gave a reference for the opposite in my post https://www.physicsforums.com/showthread.php?p=2760591#post2760591 but this paper is not freely accessible so it's hard to discuss it. But if you give your reference then maybe we will be able to discuss the point.
 
Last edited by a moderator:
  • #737
ThomasT said:
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.

Again, you might take a look at the lotto card analogy I offered in post #2 here. If Alice and Bob are each sent scratch lotto cards with a choice of one of three boxes to scratch, and we find that on every trial where they choose the same box to scratch they end up seeing the same fruit, a natural theory would be that the source is always creating pairs of cards that have the same set of "hidden fruits" behind each of the three boxes. But this leads to the conclusion that on the trials where they choose different boxes there should be at least a 1/3 probability they'll see the same fruit, so if the actual observed frequency of seeing the same fruit when they scratch different boxes is some smaller number like 1/4, this can be taken as a falsification of the idea that the identical results when identical boxes are chosen can be explained by each card being assigned identical hidden properties by the source.
ThomasT said:
Do you doubt that this is the view of virtually all physicists?
Virtually all physicists would agree that the violation of Bell inequalities constitutes a falsification of the kind of theory you describe, assuming you're talking about a purely local theory.
 
  • #738
zonde said:
Can you produce some reference?

I gave a reference for the opposite in my post https://www.physicsforums.com/showthread.php?p=2760591#post2760591 but this paper is not freely accessible so it's hard to discuss it. But if you give your reference then maybe we will be able to discuss the point.

Here are a couple that may help us:

Theory:
http://www.cs.rochester.edu/~cding/Teaching/573Spring2005/ur_only/GHZ-AJP90.pdf

Experiment:
http://arxiv.org/abs/quant-ph/9810035

"It is demonstrated that the premisses of the Einstein-Podolsky-Rosen paper are inconsistent when applied to quantum systems consisting of at least three particles. The demonstration reveals that the EPR program contradicts quantum mechanics even for the cases of perfect correlations. By perfect correlations is meant arrangements by which the result of the measurement on one particle can be predicted with certainty given the outcomes of measurements on the other particles of the system. This incompatibility with quantum mechanics is stronger than the one previously revealed for two-particle systems by Bell's inequality, where no contradiction arises at the level of perfect correlations. Both spin-correlation and multiparticle interferometry examples are given of suitable three- and four-particle arrangements, both at the gedanken and at the real experiment level. "
 
Last edited by a moderator:
  • #739
ThomasT said:
The EPR view was that the element of reality at B determined by a measurement at A wasn't reasonably explained by spooky action at a distance -- but rather that it was reasonably explained by deductive logic, given the applicable conservation laws.

That is, given a situation in which two disturbances are emitted via a common source and subsequently measured, or a situation in which two disturbances interact and are subsequently measured, then the subsequent measurement of one will allow deductions regarding certain motional properties of the other.

Do you doubt that this is the view of virtually all physicists?

Do you see anything wrong with this view?

Sorry, I may have missed this post, and I saw JesseM replying so I thought I would chime in...

The EPR conclusion is most certainly not the view which is currently accepted. That is because the EPR view has been theoretically (Bell) and experimentally (Aspect) rejected. But that was not the case in 1935. At that time, the jury was still out.

What is wrong with this view is that it violates the Heisenberg Uncertainty Principle. Nature does not allow that.
 
  • #740
zonde said:
Yes, that is only speculation. Nothing straightforwardly testable.
It is demonstrably consistent with any test. This consistency comes from the fact that if you take a polarized beam and offset a polarizer in its path, with the offset defined by the difference between the light's polarization and the polarizer setting, the statistics of what is passed, defined by the light intensity making it through that polarizer, exactly match the assumptions I am making in all cases.

To demonstrate you can use this polarizer applet:
http://www.lon-capa.org/~mmp/kap24/polarizers/Polarizer.htm
Just add a second polarizer and consider the light coming through the first polarizer your polarized beam, which means you double whatever percentage is read, because the 50% lost to the first polarizer doesn't count.

zonde said:
Or lost 35% and gained 35%. Or lost x% and gained x%.
The question is not about lost photon count = gained photon count.
Question is about this number - 15%.
You will keep insisting that it's 15% because it's 15% both ways then we can stop our discussion right there.
The number 15% only results from the 22.5 setting. If we use a 45 setting then it's 50% lost and 50% gained. Any setting's cos^2(theta) defines both lost and gained, because sin^2(theta) = |cos^2(90-theta)| in all cases. There is nothing special about 22.5 and 15%.


zonde said:
sin(theta)=cos(90-theta) is trivial trigonometric identity. What you expect to prove with that?
That's why it constitutes a proof at all angles, and not just the 22.5 degree setting that gets 15% lost and gained in the example used.

zonde said:
Switch routes to ... FROM what?
You have no switching with ONE setting. You have to have switching FROM ... TO ... otherwise there is no switching.
Lost means photons that would have passed the polarizer but didn't at that setting. Gained means photons that wouldn't have passed the polarizer but did at that setting. Let's look at it using a PBS so we can divide things into H, V and L, R routes through the polarizer.

Consider a PBS, rather than a plain polarizer, placed in front of a simple polarized beam of light that evenly contains pure H and V polarized photons. We'll label the V polarization as angle 0. So a PBS set at angle 0 will have 100% of the V photons take the L route, and 100% of the H photons take the R route. At 22.5 degrees, the L beam is ~85% V photons and ~15% H photons, while the R beam now contains ~15% V photons and ~85% H photons. WARNING: you have to consider that measuring the photons at a new setting changes the photons' polarization to be consistent with that new setting. At a setting of 45 degrees you get 50% H and 50% V going L, and 50% H and 50% V going R. Nothing special about 15% or the 22.5 degree setting.

Now, what sin^2(theta) = cos^2(90-theta) represents here is any one (but only one) polarizer setting, such that theta = theta in both cases, and our sin^2(theta) is the V photons that switch to the R route, while cos^2(90-theta) is the H photons that switch to the L route.

Now, since this is a trig identity in all cases, it is valid for ANY uniform mixture of polarizations, whether 2 pure H and V beams or a random distribution, which by definition is a uniform mixture of polarizations.

It would even be easy to make non-uniform beam mixtures, where certain ranges of polarizations are missing from the beam, such that sin^2(theta) = cos^2(90-theta) can be used to define the ratios of beam intensities as theta, the polarizer setting, is adjusted. If ANY situation can be defined where sin^2(theta) = cos^2(90-theta) doesn't properly predict the beam intensity ratios, for any crafted beam mixture, then I'm wrong.

And here's the kicker: by defining properties in terms of photon properties, rather than properties as defined by the polarizer settings that detect them, and using these polarizer path statistics, BI violation statistics also result as a consequence.
 
  • #741
DrChinese said:
Again, I am missing your point. So what? How does this relate to Bell's Theorem or local realism?
It relates to the arbitrary angle condition placed on the modeling of hv models and nothing else.
Consider:
A hidden variable model successfully models QM coincidence statistics, but requires a coordinate freedom that is objected to. The following properties are noted:
1) One or the other, but not both, detector settings must be defined to have a 0 angle setting. (objection noted)
2) The detector defined as the zero setting has zero information about the other detector's setting.
3) The zero setting can be arbitrarily changed to any absolute setting, along with the detector angle changes, with or WITHOUT redefining absolute photon polarizations in the process.
4) The default photon polarizations can be rotated with absolute impunity, having no effect whatsoever on coincidence statistics.
5) The only thing considered for detections/non-detections is the photon polarization relative to the setting of the detector it actually hit.

Thus this proves the 0 coordinate requirement in no way hinges upon physical properties unique to the angles chosen. It is a mathematical artifact, related to non-commuting vectors. It's essentially equivalent to giving only the path of a pool ball and demanding that the path of the cue ball that hit it must be uniquely calculable in order to prove pool balls are real.

I'll get around to attempting to use predefined non-commutative vectors to get around it soon, but I have grave doubts. Disallowing arbitrary 0 coordinates is tantamount to disallowing an inertial observer from defining their own velocity as 0, which would require a universal 0 velocity.

At the very least, I would appreciate if you quit misrepresenting the 0 angle condition as a statistically unique physical state at that angle.
 
  • #742
JesseM said:
If this is a local theory in which any correlations between the two disturbances are explained by properties given to them by the common source, with the disturbances just carrying the same properties along with them as they travel, then this is exactly the sort of theory that Bell examined, and showed that such theories imply certain conclusions about the statistics we find when we measure the "disturbances", the Bell inequalities. Since these inequalities are violated experimentally, this is taken as a falsification of any such local theory which explains correlations in terms of common properties given to the particles by the source.

When you say: "this is exactly the sort of theory that Bell examined", it does require some presumptive caveats. That is that the properties that is supposed as carried by the photons are uniquely identified by the route it takes at a detector.

If a particle has a perfectly distinct property, for which a detector tuned to a nearby setting has some nonlinear odds of defining the property as equal to that offset setting, then BI violations ensue. The problem for models is that vector products are non-commutative, requiring a 0 angle to be defined for one of the detectors.

Consider a hv model that models BI violations, but has the 0 setting condition. You can assign one coordinate system to the emitter, which the detectors know nothing about, and another coordinate system to the detectors, which the emitter knows nothing about, but which rotates in tandem with one or the other detector. Now rotating the emitter has absolutely no effect on the coincidence statistics whatsoever, thus proving that the statistics are not unique to physical states of the particles at a given setting. You can also have any arbitrary offset between the two detectors, and consistency with QM is still maintained. Thus the non-commutativity of vectors is the stumbling block for such models. But the complete insensitivity to arbitrary emitter settings proves it's not a physical stumbling block.

So perhaps you can explain to me the physical significance of requiring a non-physical coordinate choice to give exactly the same answers to vector products, under arbitrary rotations, when you can't even do that on a pool table?
 
  • #743
my_wan said:
Lost means photons that would have passed the polarizer but didn't at that setting. Gained means photons that wouldn't have passed the polarizer but did at that setting. Let's look at it using a PBS so we can divide things into H, V and L, R routes through the polarizer.

Consider a PBS, rather than a plain polarizer, placed in front of a simple polarized beam of light that evenly contains pure H and V polarized photons. We'll label the V polarization as angle 0. So a PBS set at angle 0 will have 100% of the V photons take the L route, and 100% of the H photons take the R route. At 22.5 degrees, the L beam is ~85% V photons and ~15% H photons, while the R beam now contains ~15% V photons and ~85% H photons. WARNING: you have to consider that measuring the photons at a new setting changes the photons' polarization to be consistent with that new setting. At a setting of 45 degrees you get 50% H and 50% V going L, and 50% H and 50% V going R. Nothing special about 15% or the 22.5 degree setting.
You compare a measurement at the 22.5 angle with a hypothetical measurement at the 0 angle. When I used similar reasoning, your comment was:

This formula breaks at arbitrary settings, because you are comparing a measurement at 22.5 to a different measurement at 0 that you are not even measuring at that time. You have 2 photon routes in any 1 measurement, not 2 polarizer settings in any 1 measurement. Instead you have one measurement at one location, and what you are comparing is the photon statistics that take a particular route through a polarizer at that one setting, not 2 settings.

In your formula you have essentially subtracted V polarizations from V polarizations not being measured at that setting, and vice versa for H polarization. We are NOT talking EPR correlations here, only normal photon route statistics as defined by a single polarizer.
So how is your reasoning so radically different than mine that you are allowed to use reasoning like that but I am not allowed?

But let's say it's fine and look at a slightly modified case of yours.
Now take a beam of light that consists of H, V, +45 and -45 polarized light. What angle should be taken as the 0 angle in this case? Let's say it's again the V polarization that is the 0 angle. Can you work out the photon rates in the L and R beams for all photons (H, V, +45, -45)?

my_wan said:
Now what the sin^2(theta) = cos^2(90-theta) represents here is anyone (but only one) polarizer setting, such that theta=theta in both cases, and our sin^2(theta) is V photons that switch to the R route, while cos^2(90-theta) is the H photons that switch to the L route.
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition that theta1 = theta2 - 90?
Otherwise it's quite unclear what you mean by your statement.
 
  • #744
zonde said:
So how is your reasoning so radically different than mine that you are allowed to use reasoning like that but I am not allowed?

When I give the formula sin^2(theta) = |cos^2(90-theta)|, theta and theta are the same number from the same measurement. Hence:
sin^2(0) = |cos^2(90-0)|
sin^2(22.5) = |cos^2(90-22.5)|
sin^2(45) = |cos^2(90-45)|
etc.

You only make presumptions about the path statistics of individual photons, and wait till after the fact to do any comparing to another measurement.

You previously gave the formula:
zonde said:
To set it straight it's |cos^2(22.5)-cos^2(0)| and |sin^2(22.5)-sin^2(0)|
Here you put in the 0 from the first measurement as if it's part of what you are now measuring. It's not. The 22.5 is ALL that you are now measuring. The only thing you are comparing after the fact is the resulting path effects. You don't include measurements you are not presently performing to calculate the results of the measurement you are now making. This is to keep the reasoning separate, and avoid the interdependence inherent in the presumed non-local aspect of EPR correlations. It also allows you to compare it to any arbitrary other measurement without redoing the calculation. It's a non-trivial condition of modeling EPR correlations without non-local effects to keep the measurements separate. On these grounds alone mixing settings from other measurements to calculate results of the present measurement must be rejected. Only the after the fact results may be compared, to see if the local path assumptions remain empirically and universally consistent, with and without EPR correlations.

The primary issue remains whether the path statistics are consistent for both the pure H and V case and the randomized polarization case. This is the point on which I will put my pride ALL in by stating that this is unequivocally a factual yes. This should also cover cases in which the intensities of H and V are unequal, giving the variations of intensity at various polarizer settings. Such non-uniform beam mixtures can quite easily be tested experimentally. From a QM perspective this would be equivalent to interference in the wavefunction at certain angles.
 
  • #745
DrChinese said:
Theory:
http://www.cs.rochester.edu/~cding/Teaching/573Spring2005/ur_only/GHZ-AJP90.pdf
Nice, it's exactly the same paper I looked at. I was just unsure whether posting that link would violate forum rules.
As the file is not searchable, I can point out that the text I quoted can be found on p.1136, in the last full paragraph (end of the page).

DrChinese said:
Experiment:
http://arxiv.org/abs/quant-ph/9810035

"It is demonstrated that the premisses of the Einstein-Podolsky-Rosen paper are inconsistent when applied to quantum systems consisting of at least three particles. The demonstration reveals that the EPR program contradicts quantum mechanics even for the cases of perfect correlations. By perfect correlations is meant arrangements by which the result of the measurement on one particle can be predicted with certainty given the outcomes of measurements on the other particles of the system. This incompatibility with quantum mechanics is stronger than the one previously revealed for two-particle systems by Bell's inequality, where no contradiction arises at the level of perfect correlations. Both spin-correlation and multiparticle interferometry examples are given of suitable three- and four-particle arrangements, both at the gedanken and at the real experiment level. "
I think I caught the point you are making.

Let's see if I will be able to explain my objections from the viewpoint of contextuality.
First about EPR, Bell and non-contextuality.
If we take a photon that has polarization angle 0° and put it through a polarizer at angle 0°, it goes through with certainty. However, if we change the polarizer angle to 45°, it goes through with a 50% chance (that's basically Malus' law).
So when we have entangled photons, we have a prediction that this 50% chance is somehow correlated between the two entangled photons.
Bell's solution to this was non-contextuality, i.e. the photon is predetermined to take its chance one way or the other. I would argue that EPR does not contain any considerations regarding the solution of this particular problem - it was just a statement of the general problem.

So what are the other options besides Bell's solution? As I see it, the other solution is that photons can be considered as taking this 50% chance (in the 45° measurement basis) depending on the particular conditions of the polarizer (the context of measurement). But in that case it is obvious that this correlation between two entangled photons taking their chances the same way should be a correlation between the measurement conditions of the two photons, and not only a correlation between the photons themselves. This of course leaves the question of how measurement conditions get "entangled", and here I speculate that some leading photons from the ensemble transfer their "entanglement" to the equipment at the cost of becoming uncorrelated.
That way we have classical correlation when we measure photons in the same basis as they were created in (0° and 90° measurement basis), and quantum (measurement context) correlation when we measure photons in a basis incompatible with the one they were created in (+45° and -45° measurement basis).

Now if we go back to GHZ: these inequalities were derived using Bell's non-contextual approach. If we look at them from the perspective of contextuality, then we can see that this measurement context correlation is not strictly tied to photon polarizations, and by varying the experimental setup it could be possible to get quite different correlations than the ones you would expect from pure classical polarization correlations.
And if we isolate conditions so that we measure mostly measurement context correlations, then pure classical polarization correlations will be only indirectly related to the observed results.
 
  • #746
my_wan said:
When I give the formula sin^2(theta) = |cos^2(90-theta)|, theta and theta are the same number from the same measurement.
Please tell me what theta represents physically.

As I asked already:
How do you define theta? Is it the angle between the polarization axis of the polarizer (PBS) and the photon, so that we have theta1 for H and theta2 for V with the condition that theta1 = theta2 - 90?
Or is it something else?
 
  • #747
zonde said:
Now if we go back to GHZ. ...

Imagine that for a Bell Inequality, you look at some group of observations. The local realistic expectation is different from the QM expectation by a few %. Perhaps 30% versus 25% or something like that.

On the other hand, GHZ essentially makes a prediction of Heads for LR, and Tails for QM, every time. You essentially NEVER get a Heads in an actual experiment; every event is Tails. So you don't have to ask whether the sample is fair. There can be no bias - unless Heads events are per se not detectable, but how could that be? There are no Tails events ever predicted according to Realism.

So using a different attack on Local Realism, you get the same results: Local Realism is ruled out. Now again, there is a slight split here, as there are scientists who conclude from GHZ that Realism (non-contextuality) is excluded in all forms. And there are others who restrict this conclusion only to Local Realism.
 
  • #748
my_wan said:
When you say: "this is exactly the sort of theory that Bell examined", it does require some presumptive caveats. That is that the properties that is supposed as carried by the photons are uniquely identified by the route it takes at a detector.

If a particle has a perfectly distinct property, for which a detector tuned to a nearby setting has some nonlinear odds of defining the property as equal to that offset setting, then BI violations ensue.
Do you just mean that local properties of the particle are affected by local properties of the detector it comes into contact with? If so, no, this cannot lead to any violations of the Bell inequalities. Suppose the experimenters each have a choice of three detector settings, and they find that on any trial where they both chose the same detector setting they always got the same measurement outcome. Then in a local hidden variables model where you have some variables associated with the particle and some with the detector, the only way to explain this is to suppose the variables associated with the two particles predetermined the result they would give for each of the three detector settings; if there was any probabilistic element to how the variables of the particles interacted with the state of the detector to produce a measurement outcome, then there would be a finite probability that the two experimenters could both choose the same detector setting and get different outcomes. Do you disagree?
my_wan said:
Consider a hv model that models BI violations, but has the 0 setting condition. You can assign one coordinate system to the emitter, which the detectors know nothing about. Another coordinate system to the detectors, which the emitter knows nothing about, but rotates in tandem with one or the other detector.
What do you mean by "assigning" coordinate systems? Coordinate systems are not associated with physical objects, they are just aspects of how we analyze a physical situation by assigning space and time coordinates to different events. Any physical situation can be analyzed using any coordinate system you like, the choice of coordinate system cannot affect your predictions about coordinate-invariant physical facts.

Anyway, your description isn't at all clear, could you come up with a mathematical description of the type of "hv model" you're imagining, rather than a verbal one?
 
  • #749
DrChinese said:
ThomasT is refuted in a separate post in which I provided a quote from Zeilinger. I can provide a similar quote from nearly any major researcher in the field. And all of them use language which is nearly identical to my own (since I closely copy them). So YES, the generally accepted view does use language like I do.
Yes, the generally accepted view does use language like you do. And the generally accepted view for 30 years was that von Neumann's proof disallowed hidden variable theories, even though that proof had been shown to be unacceptable some 30 years before Bell's paper.

Zeilinger's language in the quote you provided, and the general tone of his continuing program, and your language wrt Bell, indicate to me that neither of you understand the subtleties of the arguments being presented here and in certain papers (which are, evidently, not being as clearly presented as necessary) regarding the interpretation of Bell's theorem (ie., the physical basis of Bell inequalities).

You can provide all the quotes you want. Quotes don't refute arguments. You're going to have to refute some purported LR models that reproduce qm predictions but are not rendered in the form of Bell's LHV model.

However, you refuse to look at them because:

DrChinese said:
I have a requirement that is the same requirement as any other scientist: provide a local realistic theory that can provide data values for 3 simultaneous settings (i.e. fulfilling the realism requirement). The only model that does this that I am aware of is the simulation model of De Raedt et al. There are no others to consider. There are, as you say, a number of other *CLAIMED* models yet none of these fulfill the realism requirement. Therefore, I will not look at them.

Please explain what you mean by "a local realistic model that can provide data values for 3 simultaneous settings". Three simultaneous settings of what? In the archetypal optical Bell test setup there's an emitter, two polarizers, and two detectors. The value of (a-b), the angular difference in the polarizer settings, can't have more than one value associated with any given pair of detection attributes. So, I just don't know what you're talking about wrt your 'requirement'.

My not understanding your 'requirement' might well be just a 'mental block' of some sort on my part. In any case, before we can continue, so that you might actually 'refute' something (which you haven't yet), you're going to have to explain, as clearly as you can, what this "data values for 3 simultaneous settings" means and how it is a 'requirement' that purported LR models of entanglement must conform to.

DrChinese said:
(Again, an exception for the De Raedt model which has a different set of issues entirely.)
My understanding is that a simulation is not, per se, a model. So, a simulation might do what a model can't. If this is incorrect, then please inform me. But if it is incorrect, then what's the point of a simulation -- when a model would suffice?

Here's my thinking about this: suppose we eventually get a simulation of an optical Bell test which reproduces the observed results. And further suppose that this simulation involves only 'locally' produced 'relationships' between counter-propagating optical disturbances. And further suppose that this simulation can only be modeled in a nonseparable (nonfactorizable) way. Then what might that tell us about Bell's ansatz?
 
  • #750
DevilsAvocado said:
I must inform the casual reader: Don’t believe everything you read at PF, especially if the poster defines you as "less sophisticated".
No offense DA, but you are 'the casual reader'.

DevilsAvocado said:
Everything is very simple: If you have one peer reviewed theory (without references or link) stating that 2 + 2 = 5 and a generally accepted and mathematical proven theorem stating 2 + 2 = 4, then one of them must be false.
No. Interpreting Bell's theorem (i.e., Bell inequalities) is not that simple. If it were, then physicists, logicians, and mathematicians wouldn't still be at odds about the physical meaning of Bell's theorem. But they are, regardless of the fact that those trying to clarify matters are, apparently, a small minority at the present time.

DevilsAvocado said:
And remember: Bell’s theorem has absolutely nothing to do with "elementary optics" or any other "optics", I repeat – absolutely nothing. Period.
Do you think that optical Bell tests (which comprise almost all Bell tests to date) have nothing to do with optics? Even the 'casual reader' will sense that something is wrong with that assessment.

The point is that if optical Bell tests have to do with optics, then any model of those experimental situations must have to do with optics also.

By the way, the fact that I think you're way off in your thinking on this doesn't diminish my admiration for your obvious desire to learn, and your contributions to this thread. Your zealous investigations and often amusing and informative posts are most welcome. And, I still feel like an idiot for overreacting to what I took at the time to be an unnecessarily slanderous post. (Maybe I was just having a bad day. Or, maybe, it isn't within your purview to make statements about other posters' orientations regarding scientific methodology -- unless they've clearly indicated that orientation. The fact is that the correct application of the scientific method sometimes requires deep logical analysis. My view, and the view of many others, is that Bell's 'logical' analysis didn't go deep enough. And, therefore, the common interpretations of Bell's theorem are flawed.)

So, while it's granted that your, and DrC's, and maybe even most physicists, current opinion and expression regarding the physical meaning of violations of BIs is the 'common' view -- consider the possibility that you just might be missing something. You seem to understand that Bell's theorem has nothing to do with optics. I agree. Is that maybe one way of approaching, and understanding, the question of why Bell's ansatz gives incorrect predictions wrt optical Bell tests?
 
