Is action at a distance possible as envisaged by the EPR Paradox?

  • #401
DrChinese said:
Here is the link to the Excel spreadsheet models I created around the De Raedt simulations:

http://www.drchinese.com/David/DeRaedtComputerSimulation.EPRBwithPhotons.B.xls

To see the code I wrote, go into the Visual Basic editor. Sheet A shows their model working correctly. Sheet B shows their model working incorrectly for a setup which matches their base assumptions.


DrC, I agree with IcedEcliptic, this is very impressive work for a "hobbyist"! Kudos and +11 on "my scale"! :smile:

There’s a lot I want to comment on in the last posts, and time is running out for today, but your code is so interesting I can’t wait:

I checked the VB code and saw that you are using VB’s pseudorandom number generator http://msdn.microsoft.com/en-us/library/f7s023d2.aspx (to get a new seed value). Could this be an "issue" (since QM is truly random)?

If you consider this an issue, there could be a solution in http://msdn.microsoft.com/en-us/library/system.security.cryptography.randomnumbergenerator.aspx for automated clients (HTTP Interface).
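Just to illustrate the practical difference (a quick Python sketch of my own, not DrC's VB code): a seeded pseudorandom generator is fully reproducible, while a cryptographic source draws on OS entropy and is not:

```python
import random
import secrets

# Seeded pseudorandom generator: the same seed always reproduces
# the same sequence (which is fine for a simulation).
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Cryptographic source: no seed, draws from OS entropy,
# so every run gives an unpredictable value.
print(secrets.randbelow(2**32))
```
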

Tomorrow I’ll be back to 'tackle' the rest, cheers!
 
Last edited by a moderator:
  • #402
ThomasT said:
What is a 'dataset' going to tell you that the expectation value and correlation function don't already?

I give up.
 
  • #403
Time for cake! +10,000 views! :biggrin:

400px-Birthday_cake.jpg
 
  • #404
devilsavocado said:
time for cake! +10,000 views! :biggrin:

400px-Birthday_cake.jpg

love it!
 
Last edited by a moderator:
  • #405
Good to know there are so many folks following this great thread! I'm enjoying it very much.
 
  • #406
Tasty looking cake! I assume the two pictures of it are entangled? If Dr. Chinese had photoshopped the candles to be out, in contrast to the "on" above, I would have died laughing.
 
  • #407
Whoooooossssh
s6lsig.jpg


Glad you all liked it!

@IcedEcliptic, of course they are entangled! Now I will call 9-1-1! :smile:
 
  • #408
DevilsAvocado said:
Whoooooossssh
s6lsig.jpg


Glad you all liked it!

@IcedEcliptic, of course they are entangled! Now I will call 9-1-1! :smile:


Wigner's Cake. :biggrin:
 
  • #409
Yup :biggrin:
 
  • #410
DevilsAvocado said:
DrC, I agree with IcedEcliptic, this is very impressive work for a "hobbyist"! Kudos and +11 on "my scale"! :smile:

There’s a lot I want to comment on in the last posts, and time is running out for today, but your code is so interesting I can’t wait:

I checked the VB code and saw that you are using VB’s pseudorandom number generator http://msdn.microsoft.com/en-us/library/f7s023d2.aspx (to get a new seed value). Could this be an "issue" (since QM is truly random)?

If you consider this an issue, there could be a solution in http://msdn.microsoft.com/en-us/library/system.security.cryptography.randomnumbergenerator.aspx for automated clients (HTTP Interface).

Tomorrow I’ll be back to 'tackle' the rest, cheers!

Yes, I know it is pseudo-random. After a while you will realize that it does not need to be "truly" random. It is just a simulation to get things moving on how everything *should* work in this area. However, it might be worth you adding that element to see how it changes things. Thanks for the reference by the way, I will check it out.
 
Last edited by a moderator:
  • #411
DrChinese said:
EPR denied we live in a world in which Alice changes spacelike separated Bob's reality. So if your theory allows Alice's reality to change Bob's (or vice versa), I consider it to be context dependent. And that would be fully consistent with the Bell result.
I'm still not sure how you can represent "context dependent" that way from what I described. It would be somewhat analogous to saying Alice changed the reality of spacelike separated Bob by accelerating, thus changing Bob's velocity non-locally.

DrChinese said:
Now, a serious problem exists in any NON-contextual candidate theory because you must explain correlations for entangled particles at the same angles, while also explaining why unentangled particles are NOT correlated. You also have the Bell inequalities to contend with. So these are severe constraints which are not present in either contextual or nonlocal theories.
If it's modeled contextually in the relativistic sense above, then "now" is simply "now" as defined by the detector, with nothing else involved. Same way Alice changed Bob's velocity, by accelerating herself "now" as defined by Alice.

I understand the ease of modeling EPR correlation when the setting for one end always stays the same, and how that breaks to varying degrees under arbitrary detector settings. We operate on a reasonable assumption: with a random spin sequence of particles, single detector settings can't make a statistical difference. I'm going to call this assumption into question wrt detection sequence, not detection rates, even though the sequence appears random to us at any single detector.

Assumptions (I'll use spin only here):
1) Spin is a distinct "real" property, regardless of time dependence, relational character, etc.
2) By 1) spin has a distinctly "real" anti-correlation with its correlated pair.

Now, given the above assumptions, when a particle enters a detector's polarizer, the particle polarization relative to the detector's polarizer has a distinctly "real" physical meaning. Thus the empirical (apparently) random detection sequence is determined by this relative particle/polarizer angle. By 1), through nothing more than simple geometry, this provides information about the "real" polarization of its pair by 2). Likewise for the other particle. Thus, through simple geometry and the realness of spin defined by 1) and 2), a zero setting for the detectors is uniquely defined by the polarizer/spin angle, regardless of the experimenter's knowledge or choice in how the detector's zero setting is chosen.

If the particle spin/polarizer angle has "real" physical meaning, arbitrary choices become moot, as that provides a reference to a unique angle inversely common to both particles. Thus a relation that provides for Bell's inequalities, where any one detector is predefined, is valid under arbitrary choices.

So now we can add 3) to our assumptions:
3) By 2) the relative spin to polarizer angle of a single detector uniquely identifies the polarization angle of both particles.

Now we obviously can't detect the relative angle between 'individual' particle spin and polarizer setting, but if spin is real it is there, and apparently affects detection sequence, though not overall detection rate. I can't say this is how it is but the information is there, without FTL. In fact, if this is true, it requires perfect determinacy to perfectly violate Bell's inequalities.
 
  • #412
DrChinese said:
Yes, I know it is pseudo-random. After a while you will realize that it does not need to be "truly" random. It is just a simulation to get things moving on how everything *should* work in this area. However, it might be worth you adding that element to see how it changes things. Thanks for the reference by the way, I will check it out.

You are welcome.

I think simulation of EPR is very interesting. It would be great to have an open "EPR framework" with real-time simulations + graphs + automatic validation of BI, which would allow for input of 'new ideas' for immediate testing, with minimal coding. Maybe a project for the future... if possible...
 
  • #413
I need to polish up my last argument and better outline its consequences. Until that post I only viewed it in the context of more complex physical constructs, but distilled down it's easier to see the bare consequences.

In essence, when we say we have a choice of detector setting we are overgeneralizing. In fact, if the realism assumptions are valid, the particle entering the detector itself defines a unique zero setting via the particle's "real" polarization. The experimenter can only choose an offset from that polarization, and not specifically the offset relative to the distant detector. A violation of Bell's inequalities, in this view, entails a unique and separate perfectly determined detection sequence for each detector offset relative to the particle polarization. Likely quantized offsets to get such perfect experimental results. The inverse of this perfectly determined sequence can be repeated IFF the distant detector chooses the same offset relative to a distant, but perfectly anticorrelated, particle. Not having prior knowledge of determinates, polarization, etc., we can only see it in the coincidences of a pair of otherwise random sequences.
 
  • #414
my_wan said:
I need to polish up my last argument and better outline its consequences. Until that post I only viewed it in the context of more complex physical constructs, but distilled down it's easier to see the bare consequences.

In essence, when we say we have a choice of detector setting we are overgeneralizing. In fact, if the realism assumptions are valid, the particle entering the detector itself defines a unique zero setting via the particle's "real" polarization. The experimenter can only choose an offset from that polarization, and not specifically the offset relative to the distant detector. A violation of Bell's inequalities, in this view, entails a unique and separate perfectly determined detection sequence for each detector offset relative to the particle polarization. Likely quantized offsets to get such perfect experimental results. The inverse of this perfectly determined sequence can be repeated IFF the distant detector chooses the same offset relative to a distant, but perfectly anticorrelated, particle. Not having prior knowledge of determinates, polarization, etc., we can only see it in the coincidences of a pair of otherwise random sequences.

OK, you are really going off the deep end now. :smile: (And I mean that in a nice way.)

Everything you are saying has been refuted a zillion times already. I can demonstrate it either by theory or by experiment, pick your poison. But first, like ThomasT, you will need to show me something! I can't refute NOTHING!

Walk me through some examples. Provide me a dataset. If you want, I will make it easy and you can talk through the perfect (EPR) correlation cases first before moving on to the Bell cases (like 0/120/240 I always mention).

And by the way, I will make a little prediction: when we are done, I will have proven your example wrong. But you won't change your opinion because you will say that there is an example that proves you right, you just haven't found it yet.

So if you are going to follow this line, you can just say so now and save us both time. The question comes down to: are you asking or are you telling? Because I'm *telling* you that your thinking does NOT follow from the facts. I mean you might want to consider this little tidbit before you go much further: photons can be entangled that have NEVER existed within the same light cone. How do you propose to explain that? That certainly would have turned Einstein's head.
 
Last edited:
  • #415
DevilsAvocado said:
You are welcome.

I think simulation of EPR is very interesting. It would be great to have an open "EPR framework" with real-time simulations + graphs + automatic validation of BI, which would allow for input of 'new ideas' for immediate testing, with minimal coding. Maybe a project for the future... if possible...

For those acquainted with C#, I have the same de Raedt simulation, but converted in an object-oriented way (this allows a clear separation between the objects (particles and filters) used in the simulation).
But an open framework should probably be started in something like http://maxima.sourceforge.net/ .
 
Last edited by a moderator:
  • #416
ajw1 said:
For those acquainted with C#, I have the same de Raedt simulation, but converted in an object-oriented way (this allows a clear separation between the objects (particles and filters) used in the simulation).

Yes, it still takes a little thought for the coder. I wanted to have something that clearly related to the original De Raedt model, so that there would be little question that my program did the job.

The issue is to make sure that there is nothing happening in the code that: a) has the detectors considered when the particles are prepared initially; or b) has particle 1/detector 1 mixed with particle 2/detector 2 in any way.

I know you are aware of this, I am saying this for the benefit of others who may be reading.
 
  • #417
I see your site is back up DrC. :smile: I'll go over it soon.

DrChinese said:
OK, you are really going off the deep end now. (And I mean that in a nice way.)

Everything you are saying has been refuted a zillion times already. I can demonstrate it either by theory or by experiment, pick your poison. But first, like ThomasT, you will need to show me something! I can't refute NOTHING!

Walk me through some examples. Provide me a dataset. If you want, I will make it easy and you can talk through the perfect (EPR) correlation cases first before moving on to the Bell cases (like 0/120/240 I always mention).

And by the way, I will make a little prediction: when we are done, I will have proven your example wrong. But you won't change your opinion because you will say that there is an example that proves you right, you just haven't found it yet.

So if you are going to follow this line, you can just say so now and save us both time. The question comes down to: are you asking or are you telling? Because I'm *telling* you that your thinking does NOT follow from the facts. I mean you might want to consider this little tidbit before you go much further: photons can be entangled that have NEVER existed within the same light cone. How do you propose to explain that? That certainly would have turned Einstein's head.
Ok, you may have a point, but I'd like to see it. I hope you deliver; I'm arguing in the hope of learning something new. I have a preference for experiment in empirical matters, but without ignoring theory, as theory is what is at issue here. As for whether I'm asking or telling: neither. I'm taking a position to be debated, to sharpen the articulation of the controversial points. The example I'll go through runs from 0° to 45°, and explains how counterfactual reasoning can be interpreted in those discrepancies. In particular, when you say on your website:
From http://www.drchinese.com/David/Bell_Theorem_Easy_Math.htm:
"Yet according to EPR, an element of reality exists independent of the act of observation. I.E. all elements of reality have definite values at all times, EVEN IF WE DON'T KNOW THEIR VALUES."
When you say, A i.e. B, I agree with A but will argue B implies properties that don't necessarily follow from A. I think it was the above interpretation you placed on the "realism" I used in my prior post.

Consider the following detection rates:
0° = 1
5° = 0.985
10° = 0.940
15° = 0.867
20° = 0.767
25° = 0.643
30° = 0.5
35° = 0.342
40° = 0.174
45° = 0
This pattern repeats inversely every 45 degrees.
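As an aside (my own observation, not something stated above): if the angle labels are read as 0° to 45° in 5° steps, every rate in the table matches cos(2θ) to within rounding, which a quick check confirms:

```python
import math

# Detection rates from the table above, keyed by angle in degrees
rates = {0: 1.0, 5: 0.985, 10: 0.940, 15: 0.867, 20: 0.767,
         25: 0.643, 30: 0.5, 35: 0.342, 40: 0.174, 45: 0.0}

for theta, rate in rates.items():
    # cos(2*theta) reproduces each entry to within rounding error
    assert abs(math.cos(math.radians(2 * theta)) - rate) < 2e-3, theta
print("all entries match cos(2*theta)")
```
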

To show the discrepancy with realism as defined, let's consider a set of string detections where any common setting of detector pairs matches this (rounded). [0] is a 'coincidence' non-detection and [1] is a 'coincidence' detection.
0° = [1111111111] (100% coincidences)
5° = [1111111111]
10° = [1111111110]
15° = [1111111110]
20° = [1111111100]
25° = [1111111000]
30° = [1111100000]
35° = [1111000000]
40° = [1100000000]
45° = [0000000000]

Now if we pick a pair of arbitrary angles, 10° and 40°, we get:
10° = [1111111110]
Diff 30° = 0.5 (empirical) :: 0.766 if reality match (falsified) -> [1111100000] versus [1111111100]: 20° = 0.767
40° = [1100000000]

Now what went wrong with realism here? Note that the strings represent coincidences, not detections. Furthermore, for any given detection, a potentially arbitrarily large, perhaps infinite, number of individual states, vectors, etc., went into defining that detection. Thus when looking at "coincidences", not detections, we can't automatically presume that the detections that define the coincidences between 10° and 40° are the same coincidences in detections between 0° and 30°. Yet the 'reality' condition being imposed presumes only a single coincidence pattern can be involved in a given coincidence rate. Thus each coincidence profile would have a distinct detection and coincidence profile for each particle and relative angle of detector, which can only be repeated on a twin to the degree that the relative detector angle matches the original relative detector angle, as defined relative to the polarization of that particle.

In principle, you can take each coincidence term in [1111111111...], [], ..., at each angle, and expand each term [1] to contain its own coincidence profile with the other coincidence elements for each variation of angle. Then repeat for those coincidence elements. This would diverge rather quickly, but presumably converge quickly with a measurement for the same reason. I can't prove this, but in some sense taking Hilbert space and wavefunctions seriously as real requires taking infinities pretty seriously, as in actual infinities.

Am I convinced by this? Perhaps on Mondays, Wednesdays, and Fridays, but it is as reasonable as anything else proposed, and I've seen no argument to escape it. Even if it flies in the face of indeterminism in principle, it doesn't even in principle allow an escape in practice, EPR notwithstanding. This mutual dependence on individual 'real' particle properties versus detector settings, and the resulting variation in specific detections versus coincidences, is how relational interpretations escape EPR while maintaining realism in the event sets that define them.

The key point here is that the specific detection pattern of a series of particles at one angle can't be the same detection pattern at another angle; cross-setting counterfactual assumptions are presumptuous with or without realism. Thus the coincidences between two pairs of detector patterns and settings are even further removed from counterfactual claims from alternative settings. Yet the "realism" as defined by impossibility claims requires coincidences from random sequence pairs to counterfactually match entirely different coincidences in entirely different random sequences as a proxy for "realness" in values. The summation of events that define the outcome can nonetheless be real, so long as you don't require a summation of them in one physical configuration, defined by the detector settings, to match the summation of the same events with another set of detector settings. It would be analogous to saying mass, space, time, etc., can't be real because observers measure them differently in different circumstances.

About your "prediction" (I hope so):
Hopefully my point is fairly clear now. I hope you can offer more, because this is where I'm stuck atm. To tell me I have to explain it isn't realistic, as the alternative hasn't explained anything either. To say I will not change my mind presumes I have made up my mind, but so long as a fundamental weakness exists in FTL claims through counterfactual reasoning, and reasonable arguments exist that justify invalidating counterfactual reasoning, even if only in realism-based toy models, I'll be stuck with uncertainty. Yes, counterfactual reasoning is a 'fundamental' weakness of Bell's theorem et al. Not to mention the trouble it creates for realism-based FTL theories. My position will remain a mere choice, which I can only hope helps lead me forward in some way, unless you can deliver.
 
Last edited by a moderator:
  • #418
my_wan said:
1. I see your site is back up DrC. :smile: I'll go over it soon...

2. When you say, A i.e. B, I agree with A but will argue B implies properties that don't necessarily follow from A. I think it was the above interpretation you placed on the "realism" I used in my prior post.

Consider the following detection rates:
0° = 1
5° = 0.985
10° = 0.940
15° = 0.867
20° = 0.767
25° = 0.643
30° = 0.5
35° = 0.342
40° = 0.174
45° = 0
This pattern repeats inversely every 45 degrees.

To show the discrepancy with realism as defined, let's consider a set of string detections where any common setting of detector pairs matches this (rounded). [0] is a 'coincidence' non-detection and [1] is a 'coincidence' detection.
0° = [1111111111] (100% coincidences)
5° = [1111111111]
10° = [1111111110]
15° = [1111111110]
20° = [1111111100]
25° = [1111111000]
30° = [1111100000]
35° = [1111000000]
40° = [1100000000]
45° = [0000000000]

Now if we pick a pair of arbitrary angles, 10° and 40°, we get:
10° = [1111111110]
Diff 30° = 0.5 (empirical) :: 0.766 if reality match (falsified) -> [1111100000] versus [1111111100]: 20° = 0.767
40° = [1100000000]

Now what went wrong with realism here? Note that the strings represent coincidences, not detections. Furthermore, for any given detection, a potentially arbitrarily large, perhaps infinite, number of individual states, vectors, etc., went into defining that detection. Thus when looking at "coincidences", not detections, we can't automatically presume that the detections that define the coincidences between 10° and 40° are the same coincidences in detections between 0° and 30°. Yet the 'reality' condition being imposed presumes only a single coincidence pattern can be involved in a given coincidence rate. Thus each coincidence profile would have a distinct detection and coincidence profile for each particle and relative angle of detector, which can only be repeated on a twin to the degree that the relative detector angle matches the original relative detector angle, as defined relative to the polarization of that particle.

1. Yes...!

2. OK, now we are getting somewhere. But you have already jumped a few places too far here, and so we need to go back a step or two.

a. EPR defines realism as being the ability to predict the outcome in advance. That is a separate criterion from the Bell test itself, and something which is assumed to be true. In other words, if we have a Bell state, we have perfect correlations. If we have perfect correlations, then there is an element of reality. If we have elements of reality at all angles, then there must be beginning values which were predetermined IF realism applies. Do you follow this argument? This is straight from EPR. Bell too. So if you agree on this definition of realism, we can apply it in your example.

b. To apply to your example: we cannot simply say: the correlations are [1111100000] or whatever. We need to specify the Alice values and the Bob values, as well as values for Chris. Later, during the test, we will make a separate selection of which pair (2 of the 3) we will actually pick. Then we calc the coincidences. If you agree with this, then we can proceed to the next steps.

And I do agree that the set of coincidences for 0 and 30 degrees is different than for 10 and 40 degrees. They have no causal connection to each other. I am glad you see that point. We will return to it later, I suspect. For purposes of our example, don't worry about randomizing the results: just get values that work correctly when we ultimately do look at coincidences. We will likely need 12 items instead of 10 in order to make the example arithmetic work out, since there are 3/12 coincidences per QM vs. 4/12 for local realism in my 0/120/240 example. By the way, that is also the same as 0/30/60 degrees, so we only need your 30 degree value to work everything out (since the perfect correlations are always 100%). Simple, eh?
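The 4/12 floor can be checked by brute force (a sketch of my own, assuming perfect correlations force identical predetermined answers at the three settings, and counting a match as a coincidence):

```python
from itertools import product

# Alice and Bob pick different settings out of 0/120/240 degrees;
# a coincidence is a match between their predetermined answers.
pairs = [(i, j) for i in range(3) for j in range(3) if i != j]

# Every local-realistic pair carries predetermined answers h for all
# three settings; find the lowest achievable match rate.
best = min(sum(h[i] == h[j] for i, j in pairs) / len(pairs)
           for h in product([0, 1], repeat=3))

print(best)  # 1/3: no predetermined assignment gets below 4/12
# QM predicts cos^2(120 degrees) = 1/4 for the same setup
```
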

Also we need to agree about what a coincidence is. I call it a coincidence if there is a match, with no deduction for non-matches. Your formula seems to deduct for non-matches, which is confusing to me. Can we use the terms such that Match=Coincidence? That way, the coincidence rate at 45 degrees is 50%. Actually, I don't entirely follow your labeling about detections vs. coincidences. There are always detections in our ideal example.
 
Last edited:
  • #419
And just to be clear: We need observer Chris (i.e. 3 sets of values) because the open question is: Does the choice of observation (i.e. which 2 observers are selected out of 3) affect the outcome? You are arguing that it cannot (assuming the observers are spacelike separated). I say it does matter, that you cannot arrive at the QM predictions otherwise.
 
  • #420
Yes, in my string notation I ignored random detections not attributable to a causal mechanism per the "reality" postulate, and reordered them in nonrandom sequences. It's trivial to simply randomize 0° and change the remaining values accordingly. I did this to grab the main content in a nonrandom handwritten way, to directly compare what was considered "real" about the coincidences. I'm a little strapped for time atm, but your issue with detections vs. coincidences is something that needs to be worked out. My string notation can't really have helped, considering your version. I need to reformulate something you are more familiar with. You used a third observer, where I simply compared one pair of coincidences at one set of detector settings to a different pair, rather than a third observer.

I'll be back later, hopefully with a third-observer version, and will also reiterate my earlier issues with realism as defined in paragraph a, and again why EPR used it knowing its limitations. You're right, we should take it piece by piece.
 
  • #421
my_wan said:
Yes, in my string notation I ignored random detections not attributable to a causal mechanism per the "reality" postulate, and reordered them in nonrandom sequences. It's trivial to simply randomize 0° and change the remaining values accordingly. I did this to grab the main content in a nonrandom handwritten way, to directly compare what was considered "real" about the coincidences. I'm a little strapped for time atm, but your issue with detections vs. coincidences is something that needs to be worked out. My string notation can't really have helped, considering your version. I need to reformulate something you are more familiar with. You used a third observer, where I simply compared one pair of coincidences at one set of detector settings to a different pair, rather than a third observer.

I'll be back later, hopefully with a third-observer version, and will also reiterate my earlier issues with realism as defined in paragraph a, and again why EPR used it knowing its limitations. You're right, we should take it piece by piece.

No problem on the time issue.

I use notation similar to yours in my examples when I am working things out. So try this for a dataset for 0/120/240:

Alice = [111111111111]
Bob = [000000001111]
Chris = [001100110000]

Notice that between any pair of observers, there are 4 out of 12 coincidences. That is 33%, the bottom limit for a local realistic theory. QM predicts a coincidence rate of 25%.

============================================================

See why I ask about datasets? If Bob's reality depends on whether Alice or Chris is the other observer, then you can have the correct relationships. But if Bob is blind to that, the relationships don't hold.
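For anyone who wants to verify the counts above, a minimal check (my own sketch, counting a match as a coincidence per the convention agreed earlier):

```python
alice = "111111111111"
bob   = "000000001111"
chris = "001100110000"

def match_rate(x, y):
    # fraction of positions where the two outcome strings agree
    return sum(a == b for a, b in zip(x, y)) / len(x)

# every pair of observers shows 4/12 coincidences, the local-realistic floor
for x, y in [(alice, bob), (alice, chris), (bob, chris)]:
    assert match_rate(x, y) == 4 / 12
print("all pairs: 4/12")
```
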
 
  • #422
DrChinese said:
It is simple. If you have a model, you can generate a dataset. Give me the values for 0/120/240 degrees for Alice and Bob, using the formula from the paper. If you think it is the same as QM, then fine, show me. P.S. QM does NOT NOT NOT say there is a realistic dataset.

I will then tear your dataset to shreds. Now, quit saying it is unnecessary when it is. I can say that I witnessed my son walking on water, but you would want to see it yourself. Well, here am I, saying I want to see it. Bell would too. :biggrin:

Just like 1 is not 2, a claim of equivalence is NOT equivalence.
This is absurd. You speak in riddles, and it's becoming clear to me that you don't understand the issues or the arguments being presented.

An equation expressing the expectation value of the joint probability isn't enough -- you want a dataset.

I know that you know enough physics to be able to ascertain if the joint probability in the paper matches the qm joint probability for the experimental situation.

So why don't you just do that, and then we can continue the discussion.
 
  • #423
IcedEcliptic said:
I give up.
One can only hope.
 
  • #424
Now, just to drive home the point I am making in my earlier post:

a. If I change the dataset to get the "right" answer between Alice and Bob:

Alice = [111111111111]
Bob = [000000000111]
Chris = [001100110000]

Then AliceBob yields 25% but BobChris is now 42% (5/12). But that doesn't work, because as we mentioned earlier, the ratio must hold between any pair of angles where the theta is the same.

b. If you want to see the QM dataset that most closely represents the "complete" reality of the test:

Alice = [111111111111]
Bob = [000000000111]

There is no Chris. Sorry Chris, you're outta here!
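A quick numeric check of the adjusted dataset in (a) above (my own sketch, again counting a match as a coincidence):

```python
alice = "111111111111"
bob   = "000000000111"
chris = "001100110000"

def match_rate(x, y):
    # fraction of positions where the two outcome strings agree
    return sum(a == b for a, b in zip(x, y)) / len(x)

print(match_rate(alice, bob))   # 3/12 = 0.25, the QM rate
print(match_rate(bob, chris))   # 5/12, not 4/12 -- fixing one pair breaks another
```
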
 
  • #425
ThomasT said:
So why don't you just do that, and then we can continue the discussion.

Forget it. You're the one out on the limb with your non-standard viewpoint. I can't prove the unprovable.

There is a formula, yes, I can read that. But it is not a local realistic candidate and there is no way to generate a dataset. He can't do it, and obviously neither can you. :bugeye:

Folks, we have another local realist claiming victory after demonstrating... ABSOLUTELY NOTHING. AGAIN.
 
  • #426
Note: The way I ended this post shocked me, it wasn't planned, but I think I'll leave it as is. I have numbers to run.

I'm going to cover two issues in this post. The second, your 3-party detection system, has a physical equivalence to a QM effect in a rather classic series of three polarizers. This led me to consider an experiment that would falsify my detection-rate-versus-coincidence objection.

Issue #1
The first is the notion of realism again. If I'm required to maintain a strict adherence to Bell realism as you have defined it, there simply is no way around it in the Bell's theorem context. Yet I find this restriction unwarranted, even in the context of EPR. The EPR paper stated: "A comprehensive definition of reality is, however, unnecessary for our purpose". What was defined was given on the limited grounds of "sufficiency" as needed for the EPR case provided in the paper. Yet even of this definition it was said: "Regarded not as necessary...". Even with these equivocations, I think the paper failed to appreciate the richness in the way measured variables can vary in relation to the states that define them. Consider the words of Schneider:
From http://www.drchinese.com/David/Hume's_Determinism_Refuted.htm:
"A review of the problem shows that we cannot, in principle, ever observe an independent variable. For it to be identified unambiguously as being independent, such variable can have no causal connection to other observables. (If there is any causal connection to another variable, then the cause cannot be narrowed to the hypothetical independent variable.) If it has no causal connection to other observables, then it cannot be observed! For all intents and purposes, it would not be part of the observable universe."

How would such an independent variable, which only intermittently maintained causal connections to other observables, fare in this notion of Bell realism? It would certainly rule out determinism and Bell realism in the empirical arena, yet still be entirely feasible in principle in the theoretical arena. It wouldn't be any more unwarranted than any mathematical postulate. You've, to my understanding, stated that contextual variables are not real, that real variables have "simultaneous definite answers", etc., yet I can't even be sure any such measurable variable exists. Planck's constant is probably the most difficult to contextualize, though some have tried.

Let's start with this question to articulate the issue: Are the following 3 variables contextual, real, or both: space, time, and mass?

Issue #2
I'll start with how to falsify my detection rate versus coincidence objection, and perhaps this will provide the meaning. Your 3 party EPR correlation is essentially equivalent to a textbook example of a set of 3 polarizers in series. When 2 of them are put in series, set at 90 degrees from each other, no light will pass through both of them. Yet if a third polarizer is placed between them, set at 45 degrees to the other 2, then 12.5% of the randomly polarized light will pass through all 3 of them, even though none could make it through just 2 at the same settings. Now we're going to do a version of your 3 party correlation test with photon polarization, except measure detection rates (intensity variance), rather than coincidences, and in parallel rather than in series. I'm personally not so concerned with absolute intensity or large separations to rule out local mechanisms; prior empirical data well satisfies me in that regard.
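As a quick numerical cross-check of the 12.5% figure above, here is a minimal sketch of the series case, assuming ideal polarizers obeying Malus's law (the function name and grid are my own):

```python
import math

def series_transmission(angles_deg):
    """Fraction of initially unpolarized light passing a chain of ideal
    polarizers. The first polarizer passes 1/2 of unpolarized light;
    each later stage passes cos^2(delta) of what reaches it, where
    delta is the angle to the previous polarizer (Malus's law)."""
    frac = 0.5
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        frac *= math.cos(math.radians(cur - prev)) ** 2
    return frac

print(series_transmission([0, 90]))      # crossed pair: ~0, no light passes
print(series_transmission([0, 45, 90]))  # with 45-degree polarizer inserted: 0.125
```

The crossed pair blocks everything, while inserting the 45° polarizer restores 12.5%, exactly as the textbook example says.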

Place an emitter at the point of origin which emits polarized photons some distance to a pair of detectors on the + and -x axis. The output of the emitter will be constant over time for reference. The initial orientation of the polarizers at the detectors will match the polarization of the emitted photons, and the detection rate (intensity), not coincidences, is measured. Now this is like photons passing through a series pair of polarizers with a common polarization setting. In the series arrangement, when you rotate one polarizer the light intensity through the 2 is reduced. Question: When you rotate polarizer A in the parallel setup, will it induce a change in detection rates (intensity) at polarizer B, like in the series arrangement? If so, my detection rate versus coincidence objection is busted. If not, this counterfactually entails that a change in polarization settings involves changing the actual individual particles involved in the correlation comparison. Thus counterfactual assumptions are empirically voided even before coincidence counting takes place.

Counterfactual assumptions have another problem in the properties of polarizers, as the 3 polarizer textbook example illustrates. When we measure the polarization of a photon, any photon that has a polarization near enough to the polarizer setting has some chance of being detected as having a polarization equal to that polarizer setting, which it does thereafter because the polarizer set it. Thus, when we talk about measuring polarization, we are actually, in most cases, resetting the polarization of that subset of particles which have properties close enough to be successful. Yet we call this a detection of a property that we just reset to that value ourselves. The amazing thing is that, when you adjust polarizer A out of alignment with B, you change the properties of the photons passing through it. Yet when you change B, to put it back in line with A, you change the properties of those photons in exactly the same way and sequence that A initially changed the properties of the photons at the other end, recreating the correlations. Does that crack the deterministic interpretation? It even provides the specific macroscopic context, polarizers resetting particle properties, which defines the context of the so called contextual variables. That would mean coincidence statistics are dependent on common spatial polarization, and the polarizers are simply resetting, not strictly measuring, that polarization, preferentially those nearest the polarizer setting. :bugeye:
 
Last edited by a moderator:
  • #427
Funny thing is it doesn't appear to make any difference whether the range within which the polarizer detects a particle is modeled as indeterminacy or an actual range. The resulting behavior is the same, at least in this case, and appears to provide a local means for the coincidences to exceed classical variance.
 
  • #428
my_wan said:
1. The first is the notion of realism again. If I'm required to maintain a strict adherence to Bell Realism as you have defined it, there simply is no way around it in the Bell's theorem context. Yet I find this restriction unwarranted, even in the context of EPR. The EPR paper stated: "A comprehensive definition of reality is, however, unnecessary for our purpose". What was defined was given on the limited grounds of "sufficiency" as needed for the EPR case provided in the paper. Yet even of this definition it was said: "Regarded not as necessary...". Yet even with these equivocations I think the paper failed to appreciate the richness in the way measured variables can vary in relation to the states that define them. Consider the words of Schneider:
http://www.drchinese.com/David/Hume's_Determinism_Refuted.htm

2. "Originally Posted by http://www.drchinese.com/David/Hume's_Determinism_Refuted.htm
A review of the problem shows that we cannot, in principle, ever observe an independent variable. For it to be identified unambiguously as being independent, such variable can have no causal connection to other observables. (If there is any causal connection to another variable, then the cause cannot be narrowed to the hypothetical independent variable.) If it has no causal connection to other observables, then it cannot be observed! For all intents and purposes, it would not be part of the observable universe."

How would such an independent variable, which only intermittently maintained causal connections to other observables, fare in this notion of Bell Realism? It would certainly rule out determinism and Bell Realism in the empirical arena, yet still be entirely feasible in principle in the theoretical arena. It wouldn't be any more unwarranted than any mathematical postulate. You've, to my understanding, stated contextual variables are not real, real variables have "simultaneous definite answers", etc., yet I can't even be sure any such measurable variable exists. Planck's constant is probably the most difficult to contextualize, though some have tried.

1. Thanks for acknowledging my point, makes the discussion a lot easier. Yes, you are required to maintain a strict adherence to Bell Realism. :smile: What other point would there be to any meaningful definition of realism than to have a definition and adhere to it? If Bell shows Local Realism is at odds with QM, then we need to know what Local and Realism mean. EPR lays that out, and your quote misses the mark. They said there are elements of reality, and they ask at the end of their paper if they are simultaneous. Bell takes that to the next step.

So no, you cannot come up with a Bell realistic dataset using a local realistic theory. QED.

2. Nice quote, I don't think I could do better. :wink:
 
  • #429
my_wan said:
Funny thing is it doesn't appear to make any difference whether the range within which the polarizer detects a particle is modeled as indeterminacy or an actual range. The resulting behavior is the same, at least in this case, and appears to provide a local means for the coincidences to exceed classical variance.

That conclusion does NOT follow. It shows non-locality could exist. Perhaps MUST exist.
 
  • #430
my_wan said:
... That would mean coincidence statistics are dependent on common spatial polarization, and the polarizers are simply resetting, not strictly measuring, that polarization, preferentially those nearest the polarizer setting. :bugeye:

But is this really strange...?

We have the QM probability distributions (HUP) for particles:
[Image: standard deviation diagram]

And at 45° we have a probability of 50% for spin up/spin down (top of the sine above), meaning the correlation is 0, or perfectly random, or equal to what LHVT can reproduce.
[Image: Bell's theorem correlation curves]

At 22.5° we get a QM correlation of 0.71 which LHVT cannot compete with...

(And maybe that’s exactly what you are saying in next post :wink:)
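The two numbers in the post above can be checked directly from the QM prediction E(δ) = cos²δ − sin²δ = cos(2δ); a minimal sketch (the function name is mine):

```python
import math

def qm_correlation(delta_deg):
    """QM correlation for polarization-entangled photons at relative
    analyzer angle delta: E = cos^2(delta) - sin^2(delta) = cos(2*delta)."""
    return math.cos(2 * math.radians(delta_deg))

print(qm_correlation(45.0))   # ~0: perfectly random, reachable by LHVT
print(qm_correlation(22.5))   # ~0.707: the value LHVT cannot match
```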
 
Last edited:
  • #431
my_wan said:
... That would mean coincidence statistics are dependent on common spatial polarization, and the polarizers are simply resetting, not strictly measuring, that polarization, preferentially those nearest the polarizer setting. :bugeye:

Not so fast! That was the entire point of the "elements of reality"!

We know that the polarizer is measuring and NOT resetting. How? We can perform the test on Alice, and use that result to predict Bob. If we can predict Bob with certainty, without changing Bob in any way prior to Bob's observation, then the Bob result is "real". Bell real.
 
  • #432
DrChinese said:
... We can perform the test on Alice, and use that result to predict Bob. If we can predict Bob with certainty, without changing Bob in any way prior to Bob's observation, then the Bob result is "real". Bell real.

Not for a single pair of entangled photons, right?
 
  • #433
DevilsAvocado said:
Not for a single pair of entangled photons, right?

Sure. Just not all angles simultaneously. But I can do for any specified angle. So the conclusion is that there must be an element of reality - per EPR - to that value.

Now keep in mind that within QM, entangled photons are in fact connected in that they are part of a shared wave state. But within local realism, there is no ongoing state called "entangled". In LR, entangled particles are more like a matched pair of socks. So there is the difference.
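DrChinese's "matched pair of socks" picture can be sketched as a toy local model (this construction is entirely my own, for illustration only): both photons carry one shared hidden polarization, and each detector answers deterministically from its local setting alone. Equal settings then always agree, yet at 22.5° apart this model's match rate is only about 0.75, short of the QM value of roughly 0.85.

```python
import math, random

def lhv_pair(a_deg, b_deg, rng=random):
    """Toy 'matched socks' local model: both photons share one hidden
    polarization lam; each side answers +1 iff lam lies within 45
    degrees of its own setting (mod 180), with no communication."""
    lam = rng.uniform(0, 180)
    def answer(setting):
        d = abs(lam - setting) % 180
        return 1 if min(d, 180 - d) < 45 else -1
    return answer(a_deg), answer(b_deg)

# Equal settings: outcomes always match, the EPR "element of reality".
assert all(a == b for a, b in (lhv_pair(30, 30) for _ in range(1000)))

# Settings 22.5 degrees apart: the local model's match rate is ~0.75.
n = 100_000
matches = sum(a == b for a, b in (lhv_pair(0, 22.5) for _ in range(n)))
print(matches / n)  # ~0.75, below QM's cos^2(22.5) ~ 0.854
```

So a local "socks" model reproduces the perfect same-angle correlations, which is why those alone cannot distinguish the two pictures; the disagreement only shows up at intermediate angles.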
 
  • #434
Dr. Chinese, I have been thinking of your "Frankenstein" entanglement, and I wonder if you could show the relationship with a data-set such as the one you just provided? I have read the papers you linked to, in this and other threads, and it seems that there is some indirect experimental evidence of this. Does this in any way reinforce some version of Cramer's TCI? Modify collapse from being atemporal, map it to decoherence instead, and we no longer require "spooky" means or extra dimensions to explain entangled pairs.
 
  • #435
DrChinese said:
Sure. Just not all angles simultaneously. But I can do for any specified angle. So the conclusion is that there must be an element of reality - per EPR - to that value.


DrC, if you could explain this to me I would be most thankful. Let’s keep it simple (safest for me :smile:):
  • We arrange the polarizers so that Alice = 0° and Bob = 22.5° (fixed).

  • We always measure Alice first, and then Bob (by putting them at different distances from S).

  • We send 100 entangled pairs of photons.

  • According to QM predictions we will get a correlation of 0.71.

  • This means we will get 71 correlated pairs (+, +) and 29 non-correlated pairs (+, -).

  • Alice at 0° will always measure 100 (+), and Bob at 22.5° will measure 71 (+) and 29 (-).

To me this means that we cannot be certain about one single outcome at Bob, only say that the probability for a single correlated pair (+, +) is 71%... what did I miss??
 
Last edited:
  • #436
DevilsAvocado said:
DrC, if you could explain this to me I would be most thankful. Let’s keep it simple (safest for me :smile:):
  • We arrange the polarizers so that Alice = 0° and Bob = 22.5° (fixed).

  • We always measure Alice first, and then Bob (by putting them at different distances from S).

  • We send 100 entangled pairs of photons.

  • According to QM predictions we will get a correlation of 0.71.

  • This means we will get 71 correlated pairs (+, +) and 31 non-correlated pairs (+, -).

  • Alice at 0° will always measure 100 (+), and Bob at 22.5° will measure 71 (+) and 31 (-).

To me this means that we cannot be certain about one single outcome at Bob, only say that the probability for a single correlated pair (+, +) is 71%... what did I miss??

Here is how it works: I measure Alice at 0 degrees. The result is a +. With that information, I can predict the result of a measurement of Bob at 0 degrees. I make the prediction without first disturbing Bob. Therefore (according to EPR, Bell and most others) there is an element of reality to Bob's polarization at 0 degrees.

Similarly: I measure Alice at X degrees. The result is a +. With that information, I can predict the result of a measurement of Bob at X degrees. I make the prediction without first disturbing Bob. There is an element of reality to Bob's polarization at X degrees.

So there is no question that regardless of what angle I measure Bob, I can predict the result in advance with certainty. The implication, if you are a realist, is that the result of that observation must have been predetermined. How could it not be? At least, that is what a realist would assume. If the outcome was predetermined, then the effect of the measurement apparatus is not really an issue. After all, you get the same answer for Alice and Bob so whatever effect it has is a wash.

Of course, all of this is the EPR line.
 
  • #437
I thought this might be interesting for this thread:

https://www.youtube.com/watch?v=yADcuGPphkY

http://www.cleoconference.org/about_cleo/Archives/2009/plenary2009.aspx
A talk given by Alain Aspect (including youtube video and pdf file of his presentation).
Abstract: Bell’s theorem has drawn physicists’ attention onto the revolutionary character of entanglement. Based on that concept, a new field has emerged, quantum information, where one uses entanglement between qubits to develop conceptually new methods for processing and transmitting information.
 
Last edited by a moderator:
  • #438
(:redface: OMG I have just proved I can’t count to 100: 71 + 31 = 102! How can you ever take me serious after this!? :smile: EDIT!)
DrChinese said:
Here is how it works: I measure Alice at 0 degrees. The result is a +. With that information, I can predict the result of a measurement of Bob at 0 degrees. I make the prediction without first disturbing Bob. Therefore (according to EPR, Bell and most others) there is an element of reality to Bob's polarization at 0 degrees.

Yes agree, absolutely no problem. (And this also works for LHV...)

DrChinese said:
Similarly: I measure Alice at X degrees. The result is a +. With that information, I can predict the result of a measurement of Bob at X degrees. I make the prediction without first disturbing Bob. There is an element of reality to Bob's polarization at X degrees.

This is where I get a problem...

DrChinese said:
Of course, all of this is the EPR line.

Ahh, this explains it! This is not according to Bell, right?

If it is Bell, then I have a problem with this:

Instead of 100, we send 10 pairs (this makes it a lot easier for me to count! :biggrin:) and round correlation to 0.7 at 22.5°. Then one possible 'sequence' could look like this:
[11111 11111] = Alice
[00011 11111] = Bob

And another possible sequence could look like this:
[11111 11111] = Alice
[11100 01111] = Bob

And this:
[11111 11111] = Alice
[11111 10001] = Bob

And this:
[11111 11111] = Alice
[01010 11111] = Bob

And this:
[11111 11111] = Alice
[11111 01010] = Bob

And so on and so forth.

Now if you went to a bookmaker to make a bet on Bob, you can’t possibly be 100% sure to get your money back, when betting on the correct entangled pair sequence, could you??


---------------------------------------------------------------------------------
Footnote: DrC, your "Frankenstein" particle is cool, but what about Craig Venter?
Who today enounce the world’s first synthetic life form! AHHHHHH
Here’s the http://a.blip.tv/scripts/flash/show.../blip.tv/?utm_source=brandlink&enablejs=true".
 
Last edited by a moderator:
  • #439
DevilsAvocado said:
Ahh, this explains it! This is not according to Bell, right?

If it is Bell, then I have a problem with this:

Instead...

The elements of reality are the part that EPR and Bell agree on. These are the so-called perfect correlations. To have a Bell state, in a Bell test, you must have these. The disagreement is whether these represent SIMULTANEOUS elements of reality. EPR thought they must, in fact thought that was the only reasonable view. But Bell realized that this imposed an important restriction on things.

In fact, that is the restriction that my_wan objects to. But that is part and parcel of Bell. In fact, it makes absolutely NO difference to Bell whether you think it is a reasonable requirement or not. The proof and the conclusion remain the same: IF you assume Bell realism (which is simply the simultaneous existence of individual EPR elements of reality), THEN QM will yield incompatible predictions. That is the Bell result.

So I am not sure by now if I have veered off from your question. :smile: So re-ask it if needed. And if I am repeating myself repeating myself repeating myself just say so.

Remember: Bell is holding the perfect correlations in his hand when he starts down the road for the 3 different settings path (a, b, c).
 
  • #440
DrChinese said:
So I am not sure by now if I have veered off from your question. :smile: So re-ask it if needed.

Well... I think I do... :smile:

There’s a lot to talk about, realism etc, and I’ll dive into that when 'basics' are 'secured and in place'. So, the original question was:
DrChinese said:
If we can predict Bob with certainty, without changing Bob in any way prior to Bob's observation, then the Bob result is "real". Bell real.

And my 'objection' to that was: "Not for a single pair of entangled photons, right?"

Why I reacted was that my understanding of Bell’s contribution to EPR was to include probability theory into EPR, and probabilities never function well on only one event = "a single pair of entangled photons"...

There are absolutely no questions that in the case of Alice 0° & Bob 0° we can, with certainty, predict Bob when only measuring Alice. And Einstein, Podolsky, and Rosen could also achieve this with their "agreement" or hidden variable.

Now, I think we both agree that at Alice 0° & Bob 22.5°, we have a QM prediction of a 0.71 correlation, right?

And I think, most certainly, that we both agree that "a single pair of entangled photons" cannot produce the number 0.71, right? (i.e. spin < ±1)

And here comes the 'tricky question'!

If I’m about to send 10 entangled pairs to Alice 0° & Bob 22.5°, the first photon for Alice would be:
[1] = Alice

Now, what is your prediction for Bob, and how certain can you be on that?? :wink:

(Remember, I’m going follow this up with 9 exactly the same questions! :biggrin:)
 
  • #441
DrChinese said:
The elements of reality are the part that EPR and Bell agree on. These are the so-called perfect correlations.
And I have repeatedly pointed out that contextual variables are allowed, and that if I'm required to maintain strict adherence to the EPR/Bell definition, used only by EPR for operational sufficiency in that restricted argument, I can't argue realism. Originally I could only argue that these contextual variables, beyond what you're allowing with the strict Bell's realism restrictions (measurement = absolute property), could validly be considered. It's like demanding that because a coin lands tails-up I must talk of 'tails-up' as a counterfactual absolute property.

Yet I still had an empirical problem with how a contextual variable would work locally. We already knew with perfect anti-correlations that the initial state prior to measurement needed no FTL mechanism to explain the correlation. Even the local polarizer setting a particle came in contact with was locally known, but the nature of 'arbitrary' detector settings remained enigmatic. This was even a problem for FTL realistic mechanisms.

Now we have an empirically identifiable effect, with an unknown but locally definable mode of operation, to contextualize that variable. All the "information" needed to supply these correlations can be strictly defined locally, with or without FTL mechanisms.

DrChinese said:
Not so fast! That was the entire point of the "elements of reality"!

We know that the polarizer is measuring and NOT resetting. How? We can perform the test on Alice, and use that result to predict Bob. If we can predict Bob with certainty, without changing Bob in any way prior to Bob's observation, then the Bob result is "real". Bell real.
Is 'tails-up' an "element of reality" of a coin? It is IFF tails is up. If we know another coin, by conservation law, is perfectly anti-correlated, we know its "element of reality" is 'heads-up', without FTL mechanisms.

DevilsAvocado called the statistics situation perfectly. So what about this "measuring and NOT resetting"? This directly contradicts what is observed when 2 polarizers set at 90 degrees pass no light, but when a 3rd, set at 45 degrees to those 2, is placed between them light can then pass. Now I'm also well aware of the HUP version of this, but the results are the same, whether the uncertainty is real or a product of our state of knowledge. Like the momentum of an individual air molecule when we know the temperature.

In the HUP version the polarizer grid acts as a measurement, like squeezing light:

Except for polarizations rather than positions (conjugates). Which again empirically changes the very properties that Bell's Realism requires us to label innate. We can't predict the angle at which an air molecule escapes a hole in a compressed air tank either. Thus the notion that the properties are unchanged by polarizers is untenable whether we're talking classical or QM.

Yet, even if a polarizer doesn't change the polarization of a photon, but merely allows a range of photon polarizations to pass, the 'group' statistics play out consistently. If you demand a singular property to be singularly defined by a polarizer, the notion that half of all randomly polarized light has that one singular property is patently ridiculous. Yet this 50% is precisely what empirically happens when we measure randomly polarized light with a single polarizer. Thus the realism = absolute unique property demand placed on EPR by Bell's Realism is empirically falsified by a single polarizer measuring randomly polarized light, without any correlation effects at any distance whatsoever.
 
Last edited by a moderator:
  • #442
Single question version:
If polarization is an absolute observer independent state of a particle, which a polarizer (is presumed to) uniquely identifies, why does a polarizer identify 50% of all randomly polarized light as having that one singular polarization?
 
  • #443
We'll call the above "single question version" Bell's realism paradox (BRP). Once we allow, and we empirically must, this range of values to be included in a polarizer measurement, then it must induce more coincidences than singular values can account for.

This coincidence overcount can be replicated by two polarizers measuring randomly polarized light. Since 50% of all randomly polarized light passes through a polarizer either at 0 or 90 degrees, accounting for 100% of the light, Bell's realism requires us to assume all randomly polarized light must have a polarization of either 0 or 90 degrees. This then results in a paradox (BRP) when we consider arbitrary polarizer settings other than 0 and 90 degrees.

Thus the empirical stature of Bell's theorem begs the question of how/why individual polarizers overcount the particles at a given polarization, but removes the locality issue from it. Of course HUP works just fine to mathematically describe this single polarizer overcount. But that 'appears' to restore the original objection EPR posed wrt indeterminacy lacking a mechanism, which, except for constraints imposed by conservation law, doesn't allow statistically significant correlations.
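The single-polarizer 50% figure that the BRP turns on can be reproduced numerically by averaging Malus's law over a uniform spread of incoming polarization angles (a minimal sketch; the discrete grid averaging is my own choice):

```python
import math

def pass_fraction(analyzer_deg, n=100_000):
    """Average Malus's-law transmission cos^2(theta - analyzer) over
    uniformly spread incoming polarization angles theta in [0, 180)."""
    a = math.radians(analyzer_deg)
    total = sum(math.cos(math.pi * i / n - a) ** 2 for i in range(n))
    return total / n

for setting in (0, 45, 90):
    print(setting, pass_fraction(setting))  # each ~0.5, independent of the setting
```

The result is 1/2 at every analyzer angle, which is the point of the paradox: each setting "claims" half of the same beam.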
 
Last edited:
  • #444
Another perspective:
Empirical proof invalidates the counterfactual assumption on which Bell's Theorem depends:

The polarization of a randomly polarized beam of light is measured at 3 angles: A=0°, B=45°, and C=90°. Since A and C add up to 100%, we cannot counterfactually maintain that the 50% detected at B would not also have been detected at A and/or C, had that measurement been performed instead, without requiring more than 100% of the photons involved. The same conservation laws which demand EPR correlations forbid it.

This is a dead solid proof invalidating the counterfactual assumption of Bell's theorem.
 
  • #445
DevilsAvocado said:
There are absolutely no questions that in the case of Alice 0° & Bob 0° we can, with certainty, predict Bob when only measuring Alice. And Einstein, Podolsky, and Rosen could also achieve this with their "agreement" or hidden variable.

Now, I think we both agree that at Alice 0° & Bob 22.5°, we have a QM prediction of a 0.71 correlation, right?

I have cos^2(22.5) as 85.36%, although I don't think the value matters for your example. I think you are calculating cos^2 - sin^2 - matches less non-matches - to get your rate, which yields a range of +1 to -1. I always calc based on matches, yielding a range from 0 to 1. Both are correct.

:biggrin:
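The two conventions DrChinese describes, worked out at 22.5° (a minimal sketch):

```python
import math

delta = math.radians(22.5)
match_rate = math.cos(delta) ** 2                # fraction of matching pairs, range 0 to 1
correlation = match_rate - math.sin(delta) ** 2  # matches minus non-matches, range -1 to +1

print(round(match_rate, 4))   # 0.8536
print(round(correlation, 4))  # 0.7071
```

Both numbers describe the same physics; 0.8536 is the match rate and 0.7071 the correlation, which is why the two posters' figures differ.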
 
  • #446
my_wan said:
Another perspective:
Empirical proof invalidates the counterfactual assumption on which Bell's Theorem depends:

The polarization of a randomly polarized beam of light is measured at 3 angles: A=0°, B=45°, and C=90°. Since A and C add up to 100%, we cannot counterfactually maintain that the 50% detected at B would not also have been detected at A and/or C, had that measurement been performed instead, without requiring more than 100% of the photons involved. The same conservation laws which demand EPR correlations forbid it.

This is a dead solid proof invalidating the counterfactual assumption of Bell's theorem.

Most certainly is not. Has nothing to do with Bell: "The same conservation laws which demand EPR correlations forbids it.". You are arguing against QM. QM does not need to be correct for Bell to be correct.

It would be helpful if you could explain your example using datasets. Then it would be unambiguous. As it is, I am having a bit of difficulty following your point. It sounds as if you are saying that light cannot change its polarization due to conservation issues.

If you have a Local Realistic theory in which this example is correctly modeled, while QM does not, let's see it. It should be obvious that your example either applies - or does not apply - equally to QM and your (still invisible) LR theory. ON THE OTHER HAND: Bell points out an important difference between QM and all LR candidate theories. This difference is generally accepted. If you don't want to accept this difference as being a defining point for an LR theory, then fine, you don't accept it. But don't expect others versed in the language of science to accept your definition of day as night, either.
 
  • #447
my_wan said:
Single question version:
If polarization is an absolute observer independent state of a particle, which a polarizer (is presumed to) uniquely identifies, why does a polarizer identify 50% of all randomly polarized light as having that one singular polarization?

Who says this?

I certainly don't. We live in an observer dependent universe. At least, that's my interpretation.
 
  • #448
DrChinese said:
Most certainly is not. Has nothing to do with Bell: "The same conservation laws which demand EPR correlations forbids it.". You are arguing against QM. QM does not need to be correct for Bell to be correct.

It would be helpful if you could explain your example using datasets. Then it would be unambiguous. As it is, I am having a bit of difficulty following your point. It sounds as if you are saying that light cannot change its polarization due to conservation issues.

If you have a Local Realistic theory in which this example is correctly modeled, while QM does not, let's see it. It should be obvious that your example either applies - or does not apply - equally to QM and your (still invisible) LR theory. ON THE OTHER HAND: Bell points out an important difference between QM and all LR candidate theories. This difference is generally accepted. If you don't want to accept this difference as being a defining point for an LR theory, then fine, you don't accept it. But don't expect others versed in the language of science to accept your definition of day as night, either.
I provided the data set:
A randomly polarized beam of light, with 3 measurements.
1) Polarizer measures 50% photons at 0°.
2) Polarizer measures 50% photons at 45°.
3) Polarizer measures 50% photons at 90°.
Therefore any one measurement must include photons from one or more of the other measurements, per conservation law.

This is not a proof for or against any hvt, nor does it disprove Bell's theorem alone. It does, as a matter of fact, invalidate the counterfactual reasoning used to apply Bell's theorem to the locality issue of EPR.

No, I am not making any argument, whatsoever, against the empirical validity of QM. I am depending on that validity, along with conservation, to make the proof claim.
 
  • #449
my_wan said:
1. And I have repeatedly pointed out that contextual variables are allowed, and that if I'm required to maintain strict adherence to the EPR/Bell definition, used only by EPR for operational sufficiency in that restricted argument, I can't argue realism. Originally I could only argue that these contextual variables, beyond what you're allowing with the strict Bell's realism restrictions (measurement = absolute property), could validly be considered. It's like demanding that because a coin lands tails-up I must talk of 'tails-up' as a counterfactual absolute property.

2. Yet I still had an empirical problem with how a contextual variable would work locally.

1. Yes, if you are a local realist, you must acknowledge that a 100% certain tails up prediction indicates an element of reality. It does NOT say that tails up itself is an element of reality. That is simply a measuring rod. But there must be SOME element of reality somewhere or else you wouldn't get the certain result.

2. This is a good question!

Because I accept Bell, I know the world is either non-local or contextual (or both). If it is non-local, then there can be communication at a distance between Alice and Bob. When Alice is measured, she sends a message to Bob indicating the nature of the measurement, and Bob changes appropriately. Or something like that, the point is if non-local action is possible then we can build a mechanism presumably which explains entanglement results.

But what if the world is contextual instead of non-local? How would I answer your question then?

The answer is: I don't know. It is merely a logical requirement of Bell that contextuality is a possibility. I don't know the mechanism.

Now, there are a number of interpretations which are non-realistic (i.e. contextual) but are fully local: Many Worlds (MWI) and Relational BlockWorld (RBW) come to mind as being explicitly local. The point is, QM is silent as to mechanisms. There is only the formalism. Yet the fact is: there is nothing specific missing from QM that we can be sure exists at this time.
 
  • #450
my_wan said:
I provided the data set:
A randomly polarized beam of light, with 3 measurements.
1) Polarizer measures 50% photons at 0°.
2) Polarizer measures 50% photons at 45°.
3) Polarizer measures 50% photons at 90°.
Therefore any one measurement must include photons from one or more of the other measurements, per conservation law.

This is not a proof for or against any hvt, nor does it disprove Bell's theorem alone. It does, as a matter of fact, invalidate the counterfactual reasoning used to apply Bell's theorem to the locality issue of EPR.

No, I am not making any argument, whatsoever, against the empirical validity of QM. I am depending on that validity, along with conservation, to make the proof claim.

1) 2) 3) are formulas, not datasets. I guess you are saying something like:

1) HHHH
2) HTTH
3) TTTT

But I guess I am missing it because that works fine. You said there is a contradiction. Where is it?
 
