Is action at a distance possible as envisaged by the EPR Paradox?

Summary
The discussion centers on the possibility of action at a distance as proposed by the EPR Paradox, with participants debating the implications of quantum entanglement. It is established that while entanglement has been experimentally demonstrated, it does not allow for faster-than-light communication or signaling. The conversation touches on various interpretations of quantum mechanics, including the Bohmian view and many-worlds interpretation, while emphasizing that Bell's theorem suggests no local hidden variables can account for quantum predictions. Participants express a mix of curiosity and skepticism regarding the implications of these findings, acknowledging the complexities and ongoing debates in the field. Overall, the conversation highlights the intricate relationship between quantum mechanics and the concept of nonlocality.
  • #991
Here's a particular case where the fair sampling / full universe objection may not be valid, in this thread:
https://www.physicsforums.com/showthread.php?t=369286
DrChinese said:
Strangely, and despite the fact that it "shouldn't" work, the results magically appeared. Keep in mind that this is for the "Unfair Sample" case - i.e. where there is a subset of the full universe. I tried for 100,000 iterations. With this coding, the full universe for both setups - entangled and unentangled - was Product State. That part almost makes sense, in fact I think it is the most reasonable point for a full universe! What doesn't make sense is the fact that you get Perfect Correlations when you have random unknown polarizations, but get Product State (less than perfect) when you have fixed polarization. That seems impossible.

However, by the rules of the simulation, it works.

Now, does this mean it is possible to violate Bell? Definitely not, and they don't claim to. What they claim is that a biased (what I call Unfair) sample can violate Bell even though the full universe does not. This particular point has not been in contention as far as I know, although I don't think anyone else has actually worked out such a model. So I think it is great work just for them to get to this point.

Here "unfair sampling" was equated with a failure to violate BI, while the "full universe" was invoked to differentiate between BI and the and a violation of BI. Yet, as I demonstrated in https://www.physicsforums.com/showthread.php?p=2788956#post2788956", the BI violations of QM, on average of all setting, does not contain a "full universe" BI violation.

Let's look at a more specific objection, to see why the "fair sampling" objection may not be valid:
DrChinese said:
After examining this statement, I believe I can find an explanation of how the computer algorithm manages to produce its results. It helps to know exactly how the bias must work. :smile: The De Raedt et al model uses the time window as a method of varying which events are detected (because that is how their fair sampling algorithm works). That means, the time delay function must be - on the average - such that events at some angle settings are more likely to be included, and events at other angle settings are on average less likely to be included.

Here it was presented 'as if' event detection failures represented a failure to detect photons. This is absolutely not the case. The detection accuracy, of photons, remained constant throughout. Only the time window in which they were detected varied, meaning there were no missing detections, only a variation in whether said detections fell within a coincidence window or not. Thus the perfectly valid objection to using variations in detection efficiency (unfair sampling) does not apply to all versions of unfair sampling. The proof provided in https://www.physicsforums.com/showthread.php?p=2788956#post2788956 tells us QM BI violations are not "full universe" BI violations either.
 
  • #992
DevilsAvocado said:
The "fair sampling assumption" is also called the "no-enhancement assumption", and I think that is a much better term. Why should we assume that nature has an unknown "enhancement" mechanism that filter out those photons, and only those, who would give us a completely different experimental result!?

Wouldn’t that be an even stranger "phenomena" than nonlocality?:bugeye:?

And the same logic goes for "closing all loopholes at once". Why nature should chose to expose different weaknesses in different experiments? That is closed separately??

It doesn’t make sense.

That depends on what you mean by "enhancement". If by "enhancement" you mean that a summation over all possible ("full universe") choices of measurement settings leads to an excess of detection events, then yes, I would agree. But the point of post #988 was that the BI violations defined by QM, and measured, do not "enhance" detection totals over the classical limit when averaged over the "full universe" of detector settings.

That is, for every detector setting choice which exceeds the classical coincidence limit, there provably exists another choice where coincidences fall below the classical coincidence limit by the exact same amount.

22.5° and 67.5° are one such pair, since cos^2(22.5°) + cos^2(67.5°) = 1. These detection variances are such that there exists an exact one-to-one correspondence between overcount angles and quantitatively identical undercount angles, so that, averaged over all possible settings, the QM and classical coincidence limits exactly match.
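As a quick numerical check of that cancellation, here is a minimal sketch. I am assuming the linear 1 - θ/90° curve as the "classical limit" being compared against; the code is only an illustration of the bookkeeping, not part of any experiment.

Code:
import numpy as np

theta = np.linspace(0.0, 90.0, 9001)        # detector offset in degrees
qm = np.cos(np.radians(theta)) ** 2         # QM coincidence rate, cos^2(theta)
classical = 1.0 - theta / 90.0              # assumed linear "classical limit"
excess = qm - classical                     # positive below 45 deg, negative above

print(excess[theta < 45].mean())            # ~ +0.068 (overcount region)
print(excess[theta > 45].mean())            # ~ -0.068 (undercount region, same size)
print(qm.mean(), classical.mean())          # both 0.5 averaged over all settings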
 
  • #993
To make the difference clearer between an experimentally invalid "unfair sampling" argument involving detection efficiencies and more general "fair sampling" arguments, consider:

You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. In both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.
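To illustrate this mechanism concretely, here is a minimal toy sketch (my own, not De Raedt's algorithm, and not claimed to violate any Bell inequality): every emitted photon is detected, but an invented angle-dependent registration delay decides whether the two clicks land inside the coincidence window. All functions and parameters below (the sin^2 delay, the window width, the jitter) are assumptions for illustration only; the point is just that the coincidence-filtered sample can differ from the full universe even with 100% detection efficiency.

Code:
import numpy as np

rng = np.random.default_rng(1)

def run(theta_a_deg, theta_b_deg, n=200_000, window=0.4, delay_scale=1.0):
    """Toy local model: every pair is detected, but each detector's
    registration time depends on the angle between the photon's hidden
    polarization and that polarizer. Pairs whose click times differ by
    more than `window` are not counted as coincidences."""
    a_set, b_set = np.radians(theta_a_deg), np.radians(theta_b_deg)
    lam = rng.uniform(0.0, np.pi, n)                    # shared hidden polarization
    # Local Malus-law outcomes: +1 = transmitted, -1 = absorbed
    a = np.where(rng.random(n) < np.cos(a_set - lam) ** 2, 1, -1)
    b = np.where(rng.random(n) < np.cos(b_set - lam) ** 2, 1, -1)
    # Invented angle-dependent delay (the "tennis ball" picture): the further
    # the polarizer is from the photon's polarization, the later the click.
    t_a = delay_scale * np.sin(2.0 * (a_set - lam)) ** 2 + rng.normal(0.0, 0.05, n)
    t_b = delay_scale * np.sin(2.0 * (b_set - lam)) ** 2 + rng.normal(0.0, 0.05, n)
    coinc = np.abs(t_a - t_b) < window                  # coincidence-window cut
    return np.mean(a * b), np.mean(a[coinc] * b[coinc]), coinc.mean()

for off in (22.5, 45.0, 67.5):
    full, sub, kept = run(0.0, off)
    print(f"offset {off:5.1f}  E(full)={full:+.3f}  E(coinc)={sub:+.3f}  kept={kept:.2%}")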
 
  • #994
There's a comparison I'd like to make between the validity of BI violations applied to realism and the validity of objections to fair sampling arguments.

When I claim that the implications of BI are valid but often overgeneralized, the exact same thing happens here: the demonstrable invalidity of "unfair sampling" involving detection efficiencies is overgeneralized to improperly invalidate all "fair sampling" arguments.

The point here is that you are treading in dangerous territory when you attempt to apply a proof involving a class instance to make claims about an entire class. Doing so technically invalidates the claim, whether you are talking about the "fair sampling" class or the "realism" class. Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Of course you can try and object to my refutation of the invalidity of "fair sampling" when such "fair sampling" doesn't involve less than perfect detection efficiencies. :biggrin:
 
  • #995
my_wan said:
There's a comparison I'd like to make between the validity of BI violations applied to realism and the validity of objections to fair sampling arguments.

When I claim that the implications of BI are valid but often overgeneralized, the exact same thing happens here: the demonstrable invalidity of "unfair sampling" involving detection efficiencies is overgeneralized to improperly invalidate all "fair sampling" arguments.

The point here is that you are treading in dangerous territory when you attempt to apply a proof involving a class instance to make claims about an entire class. Doing so technically invalidates the claim, whether you are talking about the "fair sampling" class or the "realism" class. Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Of course you can try and object to my refutation of the invalidity of "fair sampling" when such "fair sampling" doesn't involve less than perfect detection efficiencies. :biggrin:


Dear my_wan;

This is very interesting to me. I would love to see some expansion on the points you are advancing, especially about this:

Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Many thanks,

JenniT
 
  • #996
You can find examples in the set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).

For example, you can write any real number, given you as input. However, as power of continuum is higher than the power of integers, there are infinitely many real numbers, which can't be given and an example. You can even provide a set of real numbers, defined in a tricky way so you can't give any examples of the numbers, belonging to that set, even that set covers [0,1] almost everywhere and it has infinite number of members!

Imagine: set of rational numbers. For example, 1/3
Set of transcendental numbers, for example pi.
Magic set I provide: no example can be given

It becomes even worse when some properties belong exclusively to that 'magic' set. See Banach-Tarski paradox as an example. No example of that weird splitting can be provided (because if one could do it then the theorem could be proven without AC)
 
  • #997
my_wan said:
... Here it was presented 'as if' event detection failures represented a failure to detect photons. This is absolutely not the case. The detection accuracy, of photons, remained constant throughout. Only the time window in which they were detected varied, meaning there were no missing detections, only a variation in whether said detections fell within a coincidence window or not. Thus the perfectly valid objection to using variations in detection efficiency (unfair sampling) does not apply to all versions of unfair sampling. The proof provided in https://www.physicsforums.com/showthread.php?p=2788956#post2788956 tells us QM BI violations are not "full universe" BI violations either.

Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:

[attached image]


I don’t think this has much to do with real experiments – this is a case of trial & error and "fine-tuning".

One thing that I find 'peculiar' is that the angles of the detectors are not independently random; angle1 is random but angle2 is always at a fixed value offset...?:confused:?

To me this does not look like the "real thing"...

Code:
' Initialize the detector settings used for all trials for this particular run - essentially what detector settings are used for "Alice" (angle1) and "Bob" (angle2)
If InitialAngle = -1 Then
  angle1 = Rnd() * Pi ' set as being a random value
  Else
  angle1 = InitialAngle ' if caller specifies a value
  End If
angle2 = angle1 + Radians(Theta) ' fixed value offset always
angle3 = angle1 + Radians(FixedOffsetForChris) ' a hypothetical 3rd setting "Chris" with fixed offset from setting for particle 1, this does not affect the model/function results in any way - it is only used for Event by Event detail trial analysis

...

For i = 1 To Iterations:

  If InitialAngle = -2 Then ' SPECIAL CASE: if the function is called with -2 for InitialAngle then the Alice/Bob/Chris observation settings are randomly re-oriented for each individual trial iteration.
    angle1 = Rnd() * Pi ' set as being a random value
    angle2 = angle1 + Radians(Theta) ' fixed value offset always
    angle3 = angle1 + Radians(FixedOffsetForChris) ' a hypothetical 3rd setting "Chris" with fixed offset from setting for particle 1, this does not affect the model/function results in any way - it is only used for Event by Event detail trial analysis
    End If

...
 
  • #998
Dmitry67 said:
You can find examples in the set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).

For example, you can write any real number, given you as input. However, as power of continuum is higher than the power of integers, there are infinitely many real numbers, which can't be given AS an example. You can even provide a set of real numbers, defined in a tricky way so you can't give any examples of the numbers, belonging to that set, even IF that set covers [0,1] almost everywhere and it has infinite number of members!

Imagine: set of rational numbers. For example, 1/3
Set of transcendental numbers, for example pi.
Magic set I provide: no example can be given

It becomes even worse when some properties belong exclusively to that 'magic' set. See Banach-Tarski paradox as an example. No example of that weird splitting can be provided (because if one could do it then the theorem could be proven without AC)

Dear Dmitry67, many thanks for quick reply. I put 2 small edits in CAPS above.

Hope that's correct?

But I do not understand your "imagine" ++ example.

Elaboration in due course would be nice.

Thank you,

JenniT
 
  • #999
my_wan said:
That depends on what you mean by "enhancement".
It means exactly the same as "fair sampling assumption": That the sample of detected pairs is representative of the pairs emitted.

I.e. we are not assuming that nature is really a tricky bastard, by constantly not showing us the "enhancements" that would spoil all EPR-Bell experiments, all the time. :biggrin:

my_wan said:
the "full universe" of detector settings.
What does this really mean??

my_wan said:
That is, for every detector setting choice which exceeds the classical coincidence limit, there provably exists another choice where coincidences fall below the classical coincidence limit by the exact same amount.

22.5° and 67.5° are one such pair, since cos^2(22.5°) + cos^2(67.5°) = 1. These detection variances are such that there exists an exact one-to-one correspondence between overcount angles and quantitatively identical undercount angles, so that, averaged over all possible settings, the QM and classical coincidence limits exactly match.

my_wan, no offence – but is this the "full universe" of detector settings?:bugeye:?

I don’t get this. What on Earth has cos^2(22.5) + cos^2(67.5) = 1 to do with the "fair sampling assumption"...?

Do you mean that we are constantly missing photons that would, if they were measured, always set correlation probability to 1?? I don’t get it...
 
  • #1,000
my_wan said:
... You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. In both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.

I could be wrong (as last time when promising a Nobel o:)). But to my understanding, the question of "fair sampling" is mainly a question of assuming – even if we only have 1% detection efficiency – that the sample we do get is representative of all the pairs emitted.

To me, this is as natural as when you grab a handful of white sand on a white beach, you don’t assume that every grain of sand that you didn’t get into your hand... is actually black! :wink:
 
  • #1,001
DevilsAvocado said:
Have seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:
I'm in the process of reviewing De Raedt's work. I'm not convinced of his argument; the physical interpretation is quite a bit more complex. They even made the observation:
http://arxiv.org/abs/1006.1728 said:
The EBCM is entirely classical in the sense that it uses concepts of the macroscopic world and makes no reference to quantum theory but is nonclassical in the sense that it does not rely on the rules of classical Newtonian dynamics.

My point does not depend on any such model, working or not, or even on whether or not the claim itself was ultimately valid. I argued against the validity only of the argument itself, not its claims. My point was limited to the over-generalization of interpreting the obvious invalidity of a "fair sampling" argument involving detection efficiencies as applying to all "fair sampling" arguments that assume nothing less than perfect detection efficiencies.

In this way, my argument is not dependent on De Raedt's work at all, and it only came into play as an example involving DrC's rebuttal, which inappropriately generalized "fair sampling" as invalid on the basis that a class instance of "fair sampling" that assumes insufficient detection efficiencies is invalid.

DevilsAvocado said:
I don’t think this has much to do with real experiments – this is a case of trial & error and "fine-tuning".

One thing that I find 'peculiar' is that the angles of the detectors are not independently random; angle1 is random but angle2 is always at a fixed value offset...?:confused:?

To me this does not look like the "real thing"...
Yes, it appears to suffer in the same way my own attempts did, but I haven't actually gotten that far yet in the review. If the correspondence holds, then he accomplished algebraically what I did with a quasi-random distribution of a bit field. However, when you say angle2 is always at a fixed value offset, what is it always offset relative to? You can spin the photon source emitter without effect, so it's not a fixed value offset relative to the source emitter. It's not a fixed value offset relative to the other detector. In fact, the fixed offset is relative to an arbitrary non-physical coordinate choice, which itself can be arbitrarily chosen.

I still need a better argument to fully justify this non-commutativity between arbitrary coordinate choices, but the non-commutativity of classical vector products may play a role.

Again, my latest argument is not predicated on De Raedt's work or any claim that BI violations can or can't be classically modeled. My argument was limited to, and only to, the use of applying the invalidity of an "unfair sampling" involving limited detection efficiencies to "unfair sampling" not involving any such limits in detection efficiencies. It's a limit of what can be claimed as a proof, and involves no statements about how nature is.
 
  • #1,002
JenniT said:
Dear my_wan;

This is very interesting to me. I would love to see some expansion on the points you are advancing, especially about this:

Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Many thanks,

JenniT
I gave an example in post #993, where I described two different "fair sampling" arguments: one involving variations in detection statistics, the other involving variations in detection timing. The point was not that either is a valid explanation of BI violations; the point was that proving the first instance invalid in the EPR context does not rule out the second instance. Yet they are both members of the same class called "fair sampling" arguments. This was only an example, not a claim of a resolution to BI violations.

Dmitry67 said:
You can find examples in the set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).
Yes! I personally think it likely you have made a fundamental connection that gets a bit deeper than what I could do more than hint at in the context of the present debate. :biggrin:
 
  • #1,003
my_wan said:
DevilsAvocado said:
Have seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:
I'm in the process of reviewing De Raedt's work. I'm not convinced of his argument; the physical interpretation is quite a bit more complex.

In this way, my argument is not dependent of De Raedt's work at all, and only came into play as an example involving DrC's rebuttal which inappropriately generalized "fair sampling" as invalid, on the basis that a class instance of "fair sampling" that assumes insufficient detection efficiencies is invalid.

The De Raedt simulation is an attempt to demonstrate that there exists an algorithm whereby (Un)Fair Sampling leads to a violation of a BI - as observed - while the full universe does not (as required by Bell). They only claim that their hypothesis is "plausible" and do not really claim it as a physical model. A physical model based on their hypothesis would be falsifiable. Loosely, their idea is that a photon might be delayed going through the apparatus and the delay might depend on physical factors. Whether you find this farfetched or not is not critical to the success of their simulation. The idea is that it is "possible".

My point has been simply that it is not at all certain that a simulation like that of De Raedt can be successfully constructed. So that is what I am looking at. I believe that there are severe constraints and I would like to see these spelled out and documented. Clearly, the constraint of the Entangled State / Product State mentioned in the other thread is a tough one. But as of this minute, I would say they have passed the test.

At any rate, they acknowledge that Bell applies. They do not assert that the full universe violates a BI.
 
  • #1,004
DevilsAvocado said:
It means exactly the same as "fair sampling assumption": That the sample of detected pairs is representative of the pairs emitted.
Yes, the "fair sampling assumption" does assume the sample of detected pairs is representative of the pairs emitted, and assuming otherwise is incongruent with the experimental constraints, thus invalid. An alternative "fair sampling assumption" assumes that the time taken to register a detection is the same regardless of the detector offsets. The invalidity of the first "fair sampling assumption" does not invalidate the second "fair sampling assumption". It's doesn't prove it's valid either, but neither is the claim that the invalidity of the first example invalidates the second.

DevilsAvocado said:
I.e. we are not assuming that nature is really a tricky bastard, by constantly not showing us the "enhancements" that would spoil all EPR-Bell experiments, all the time. :biggrin:
Again, tricky how? We know it's tricky in some sense. Consider the event timing versus event detection rates in the above example. If you bounce a tennis ball off the wall, its return time is dependent on the angle at which it hits the wall in front of you. Its path length is also dependent on the angle it hits the wall. Is nature being "tricky" doing this? Is nature being "tricky" if it takes longer to detect a photon passing a polarizer at an angle than it takes if the polarizer has a common, or more nearly common, polarization as the photon? I wouldn't call that "tricky", any more than a 2 piece pyramid puzzle is. In years, only one person I've met who hadn't seen it before was able to solve it without help.
http://www.puzzle-factory.com/pyramid-2pc.html
We already know the speed of light is different in mediums with a different index of refraction.

DevilsAvocado said:
What does this really mean??
This was in reference to "full universe". DrC and I did use it in a slightly different sense. DrC used it to mean any possible 'set of' detector settings. I used it to mean 'all possible' detector settings. I'll explain the consequences in more detail below.

DevilsAvocado said:
my_wan, no offence – but is this the "full universe" of detector settings?:bugeye:?

I don’t get this. What on Earth has cos^2(22.5) + cos^2(67.5) = 1 to do with the "fair sampling assumption"...?

Do you mean that we are constantly missing photons that would, if they were measured, always set correlation probability to 1?? I don’t get it...
No, there are NO photon detections missing! Refer back to post #993. The only difference is in how fast the detection occurs, yet even this is an example, not a claim. If 2 photons hit 2 different detectors at the same time, but one of them takes longer to register the detection, then they will not appear correlated because they appeared to occur at 2 separate times. Not one of the detections is missing, only delayed.

Ok, here's the "full universe" argument again, in more detail.
The classical limit, as defined, sets a maximum correlation rate for any given setting offset. QM predicts, and experiments support, that for the offsets between 0 and 45 degrees the maximum classical limit is exceeded. QM also predicts that, for the angles between 45 and 90 degrees, the QM correlations are less than the classical limit. This is repeated on every 90 degree segment. If you add up all the extra correlations between 0 and 45 degrees, that exceed the classical limit, and add it to the missing correlations between 45 and 90 degrees, that the classical limit allows, you end up with ZERO extra correlations. Repeat for the other 3 90 degree segments and 4 x 0 = 0. QM does not predict any extra correlations when you average over all possible settings. It only allows you to choose certain limited non-random settings where the classical limit is exceeded, which presents problems for classical models.
 
  • #1,005
DrC,
Please note that my argument has nothing to do with the De Raedt simulation. It was merely an example of overextending the lack of validity of a fair sampling argument involving limited detection efficiencies to fair sampling arguments that could remain valid even if detection efficiencies were always absolutely perfect.
 
  • #1,006
DrC, two questions,
1) Do you agree that "fair sampling" assumptions exist, irrespective of validity, that do not involve the assumption that photon detection efficiencies are less than perfect?
2) Do you agree that, averaged over all possible settings rather than some chosen subset of settings, the QM and classical correlation limits lead to the same overall total number of detections?
 
  • #1,007
my_wan said:
However, when you say angle2 is always at fixed value offset, what is it always offset relative to?

angle2 = angle1 + Radians(Theta) ' fixed value offset always


And Theta is a (user) argument into the main function.

my_wan said:
Again, tricky how.
DevilsAvocado said:
To me, this is as natural as when you grab a handful of white sand on a white beach, you don’t assume that every grain of sand that you didn’t get into your hand... is actually black! :wink:


my_wan said:
No, there are NO photon detections missing! Refer back to post #993. The only difference is in how fast the detection occurs, yet even this is an example, not a claim. If 2 photons hit 2 different detectors at the same time, but one of them takes longer to register the detection, then they will not appear correlated because they appeared to occur at 2 separate times. Not one of the detections is missing, only delayed.

Ahh! Now I get it! Thanks for explaining. My guess on this specific case is that it’s very easy to change the detection window (normally 4-6 ns?) to look for dramatic changes... and I guess that in all of the thousands of EPR-Bell experiments, this must have been done at least once...? Maybe DrC knows?

my_wan said:
Ok, here's the "full universe" argument again, in more detail.
The classical limit, as defined, sets a maximum correlation rate for any given setting offset. QM predicts, and experiments support, that for the offsets between 0 and 45 degrees the maximum classical limit is exceeded. QM also predicts that, for the angles between 45 and 90 degrees, the QM correlations are less than the classical limit. This is repeated on every 90 degree segment.

Okay, you are talking about this curve, right?

[attached image of the correlation curve]
 
  • #1,008
my_wan said:
If you add up all the extra correlations between 0 and 45 degrees, that exceed the classical limit, and add it to the missing correlations between 45 and 90 degrees, that the classical limit allows, you end up with ZERO extra correlations.

You could see it this way. You could also see it as: the very tricky nature would then have to wobble between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...
 
  • #1,009
DA, nice recent (long) post, #985. Sorry for the delay in replying. I've been busy with holiday activities. Anyway, I see that there have been some replies to (and amendments or revisions by you of) your post. I've lost count of how many times I've changed my mind on how to approach understanding both Bell and entanglement correlations. One consideration involves the proper interpretation of Bell's work and results wrt LHV or LR models of entanglement. Another consideration involves the grounds for assuming nonlocality in nature. And yet another consideration involves approaches to understanding how light might be behaving in optical Bell tests to produce the observed correlations, without assuming nonlocality. The latter involves quantum optics. Unfortunately, qo doesn't elucidate instrument-independent photon behavior (ie., what's going on between emission and filtration/detection). So, there's some room for speculation there (not that there's any way of definitively knowing whether a proposed, and viable, 'realistic' model of 'interim' photon behavior corresponds to reality). In connection with this, JenniT is developing an LR model in the thread on Bell's mathematics, and Qubix has provided a link to a proposed LR model by Joy Christian.

Anyway, it isn't like these are easy questions/considerations.

Here's a paper that I'm reading which you might be interested in:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.2097v2.pdf

And here's an article in the Stanford Encyclopedia of Philosophy on the EPR argument:

http://plato.stanford.edu/entries/qt-epr/#1.2

Pay special attention to Einstein on locality/separability, because it has implications regarding why Bell's LHV ansatz might be simply an incorrect model of the experimental situation rather than implying nonlocality in nature.

Wrt to your exercises illustrating the difficulty of understanding the optical Bell test correlations in terms of specific polarization vectors -- yes, that is a problem. It's something that probably most, or maybe all, of the readers of this thread have worked through. It suggests a few possibilities: (1) the usual notion/'understanding' of polarization is incorrect or not a comprehensive physical description, (2) the usual notion/'understanding' of spin is incorrect or not a comprehensive physical description, (3) the concepts are being misapplied or inadequately/incorrectly modeled, (4) the experimental situation is being incorrectly modeled, (5) the dynamics of the reality underlying instrumental behavior is significantly different from our sensory reality/experience, (6) there is no reality underlying instrumental behavior or underlying our sensory reality/experience, etc., etc. My current personal favorites are (3) and (4), but, of course, that could change. Wrt fundamental physics, while there's room for speculation, one still has to base any speculations on well established physical laws and dynamical principles which are, necessarily, based on real physical evidence (ie. instrumental behavior, and our sensory experience, our sensory apprehension of 'reality' -- involving, and evolving according to, the scientific method of understanding).

And now, since I have nothing else to do for a while, I'll reply to a few of your statements. Keep a sense of humor, because I feel like being sarcastic.

DevilsAvocado said:
ThomasT, I see you and billschnieder spend hundreds of posts in trying to disprove Bell's (2) with various farfetched arguments, believing that if Bell's (2) can be proven wrong – then Bell's Theorem and all other work done by Bell will go down the drain, including nonlocality.
My current opinion is that Bell's proof of the nonviability of his LHV model of entanglement doesn't warrant the assumption of nonlocality. Why? Because, imo, Bell's (2) doesn't correctly model the experimental situation. This is what billschnieder and others have shown, afaict. There are several conceptually different ways to approach this, and so there are several conceptually different ways of showing this, and several conceptually different proposed, and viable, LR, or at least Local Deterministic, models of entanglement.

If any of these approaches is eventually accepted as more or less correct, then, yes, that will obviate the assumption of nonlocality, but, no, that will not flush all of Bell's work down the drain. Bell's work was pioneering, even if his LHV ansatz is eventually accepted as not general and therefore not implying nonlocality.

DevilsAvocado said:
The aim of the EPR paradox was to show that there was a preexisting reality at the microscopic QM level - that the QM particles indeed had a real value before any measurements were performed (thus disproving Heisenberg uncertainty principle HUP).

To make the EPR paper extremely short; If we know the momentum of a particle, then by measuring the position on a twin particle, we would know both momentum & position for a single QM particle - which according to HUP is impossible information, and thus Einstein had proven QM to be incomplete ("God does not play dice").
The papers I referenced above have something to say about this.

DevilsAvocado said:
Do you understand why we get upset when you and billschnieder argue the way you do?
Yes. Because you're a drama queen. But we're simply presenting and analyzing and evaluating ideas. There should be no drama related to that. Just like there's no crying in baseball. Ok?

DevilsAvocado said:
You are urging PF users to read cranky papers - while you & billschnieder obviously hasn’t read, or understand, the original Bell paper that this is all about??
I don't recall urging anyone to read cranky papers. If you're talking about Kracklauer, I haven't read all his papers yet, so I don't have any opinion as to their purported (by you) crankiness. But, what I have read so far isn't cranky. I think I did urge 'you' to read his papers, which would seem to be necessary since you're the progenitor, afaik, of the idea that Kracklauer is a crank and a crazy person.

The position you've taken, and assertions you've made, regarding Kracklauer, put you in a precarious position. The bottom line is that the guy has some ideas that he's promoting. That's all. They're out there for anyone to read and criticize. Maybe he's wrong on some things. Maybe he's wrong on everything. So what? Afaict, so far, he's far more qualified than you to have ideas about and comment on this stuff. Maybe he's promoting his ideas too zealously for your taste or sensibility. Again, who cares? If you disagree with an argument or an idea, then refute it if you can.

As for billschnieder and myself reading Bell's papers, well of course we've read them. In fact, you'll find somewhere back in this thread where I had not understood a part of the Illustrations section, and said as much, and changed my assessment of what Bell was saying wrt it.

And of course it's possible, though not likely, that neither billschnieder nor I understand what Bell's original paper was all about. But I think it's much more likely that it's you who's missing some subtleties wrt its interpretation. No offense of course.

Anyway, I appreciate your most recent lengthy post, and revisions, and most of your other posts, as genuine attempts by you to understand the issues at hand. I don't think that anybody fully understands them yet. So physicists and philosophers continue to discuss them. And insights into subtle problems with Bell's formulation, and interpretations thereof, continue to be presented, along with LR models of entanglement that have yet to be refuted.

Please read the stuff I linked to. It's written by bona fide respected physicists.

And, by the way, nice recent posts, but the possible experimental 'loopholes' (whether fair sampling/detection, or coincidence, or communication, or whatever) have nothing to do with evaluating the meaning of Bell's theorem. The correlation between the angular difference of the polarizers and coincidental detection must be, according to empirically established (and local) optical laws, a sinusoidal function, not a linear one.
 
  • #1,010
my_wan said:
To make the difference clearer between an experimentally invalid "unfair sampling" argument involving detection efficiencies and more general "fair sampling" arguments, consider:

You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. In both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.

I think it is a mistake to think that "unfair sampling" is only referring to detection rate. The CHSH inequality is the following:

|E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2

It is true that in deriving this, Bell assumed every photon/particle was detected given that his A(.) and B(.) functions are defined as two-valued functions (+1, -1) rather than three-valued functions with a non-detection outcome included. An important point to note here is (1) there is a P(λ), implicit in each of the expectation value terms in that inequality, and Bell's derivation relies on the fact that P(λ) is exactly the same probability distribution for each and every term in that inequality.

Experimentally, not all photons are detected, so the "fair sampling assumption" together with "coincidence circuitry" is used to overcome that problem. Therefore the "fair sampling assumption" is invoked in addition to the coincidence counting to state that the detected coincident photons are representative of the full universe of photon pairs leaving the source.

The next important point to remember is this; (2) in real experiments each term in the inequality is a conditional expectation value, conditioned on "coincidence". The effective inequality being calculated in a real experiment is therefore:

|E(a,b|coinc) + E(a,b'|coinc) + E(a',b|coinc) - E(a',b'|coinc)| <= 2

So, looking at both crucial points above and remembering the way experiments are actually performed, we come to understand that the "fair sampling assumption" entails the following:

1) P(coinc) MUST be independent of λ
2) P(coinc) MUST be independent of a and/or b (ie joined channel efficiencies must be factorizable)
3) P(λ) MUST be independent of a and/or b
4) If for any specific setting pair(a,b), the probability of "non-consideration" of a photon pair (ie, no coincidence) is dependent on the hidden parameter λ, then (1), (2) and (3) will fail, and together with them, the "fair sampling assumption" will fail.
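Making that conditioning explicit (a sketch of the standard Bayes step in the notation above, and taking P(λ) itself to be independent of the settings, as in point 3):

$$E(a,b\mid\mathrm{coinc})=\int A(a,\lambda)\,B(b,\lambda)\,P(\lambda\mid\mathrm{coinc},a,b)\,d\lambda,\qquad P(\lambda\mid\mathrm{coinc},a,b)=\frac{P(\mathrm{coinc}\mid a,b,\lambda)\,P(\lambda)}{P(\mathrm{coinc}\mid a,b)}.$$

If P(coinc|a,b,λ) depends on λ and on the settings, the effective distribution over λ differs from term to term, which is exactly what (1)-(3) forbid.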

The question then becomes, is it unreasonable to expect that for certain hidden λ, P(coinc) will not be the same in all 4 terms and therefore P(λ) can not be expected to always be the same for all 4 terms?

In fact (2) has been put to the test using real data from the Weihs et al experiment and failed. See the article here (http://arxiv4.library.cornell.edu/abs/quant-ph/0606122 , J. Phys. B 40 No 1 (2007) 131-141)
Abstract:
We analyze optical EPR experimental data performed by Weihs et al. in Innsbruck 1997-1998. We show that for some linear combinations of the raw coincidence rates, the experimental results display some anomalous behavior that a more general source state (like non-maximally entangled state) cannot straightforwardly account for. We attempt to explain these anomalies by taking account of the relative efficiencies of the four channels. For this purpose, we use the fair sampling assumption, and assume explicitly that the detection efficiencies for the pairs of entangled photons can be written as a product of the two corresponding detection efficiencies for the single photons. We show that this explicit use of fair sampling cannot be maintained to be a reasonable assumption as it leads to an apparent violation of the no-signalling principle.
 
  • #1,011
Note that I am describing classes of realistic constructs, to demonstrate the absurdity of generalizing the refutation of a single class instance of a realism class to represent a refutation of realism in general. It goes to the legitimacy of this generalization of realism, as defined by EPR, not to any given class or class instance described.

The most surprising result of such attempts at providing examples of realism models that are explicitly at odds with realism as defined by EPR is that I'm often paraphrased as requiring what these example model classes are explicitly formulated to reject. Namely: 1) That observables are representative indicators of elements of reality. 2) Real observables are linear representative indicators of such elements. 3) Properties are pre-existing (innate) to such elements. These are all presumptuous, but are diametrically opposed to realism as defined by EPR, thus such constructive elements of reality are not addressed by BI, with or without locality.

JesseM said:
But Bell's proof is abstract and mathematical, it doesn't depend on whether it is possible to simulate a given hidden variables theory computationally, so why does it matter what the "computational demands of modeling BI violations" are? I also don't understand your point about a transfinite set of hidden variables and Hilbert's Hotel paradox...do you think there is some specific step in the proof that depends on whether lambda stands for a finite or transfinite number of facts, or that would be called into question if we assumed it was transfinite?
I understand the mathematical abstraction BI is based on. It is because the mathematics is abstract that the consequent assumptions of the claims go beyond the validity of BI. Asher Peres notes that "elements of reality" are identified with the EPR definition. He also notes the extra assumption that the sum or product of two commuting elements of reality is also an element of reality. In:
http://www.springerlink.com/content/g864674334074211/
He outlines the algebraic contradiction that ensues from these assumptions. On what basis are these notions of realism predicated? If "elements of reality" exist, how justified are we in presuming that properties are innate to these elements?

Our own DrC has written some insightful comments concerning realism, refuting Hume, in which it was noted how independent variables must be unobservable. If all fundamental variables are in some sense independent, how do we get observables? My guess is that observables are a propagation of events, not things. Even the attempt to detect an "element of reality" entails the creation of events, where what's detected is not the "element of reality" but the propagation observables (event sets) created by the events, not the properties of "elements of reality".

Consider a classical analog involving laminar versus turbulent flow, and suppose you could only define density in terms of the event rates (collisions in classical terms) in the medium. The classical notion of particle density disappears. This is at a fundamental level roughly the basis of many different models, involving GR, QM, and some for QG. Erik Verlinde is taking some jousting from his colleagues for a preprint along roughly similar lines.

The point here is that treating properties as something owned by things is absurdly naive, and it is even more naive to assume real properties are commutative representations of things (think back to the event rate example). This is also fundamentally what is meant by "statistically complete variables" in the published literature.

Now you can object to it not being "realistic" on the basis of not identifying individual "elements of reality", but if the unobservability argument above is valid, on what grounds do you object to a theory that doesn't uniquely identify unobservables (independent elements of reality)? Is that justification for a claim of non-existence?

JesseM said:
I'm not sure what you mean by "projections from a space"...my definition of local realism above was defined in terms of points in our observable spacetime, if an event A outside the past light cone of event B can nevertheless have a causal effect on B then the theory is not local realist theory in our spacetime according to my definition, even if the values of variables at A and B are actually "projections" from a different unseen space where A is in the past light cone of B (is that something like what you meant?)
Consider a standard covariant transform in GR. A particular observer's perspective is a "projection" of this curved space onto the Euclidean space our perceptions are predisposed to. Suppose we generalize this even further, to include the Born rule, |ψ|^2, such that a mapping of a set of points involves mapping them onto a powerset of points. Aside from the implications in set theory, this leads to non-commutativity even if the variables are commutative within the space that defines them. Would such a house of mirrors distortion of our observer perspective of what is commutative invalidate "realism", even when those same variables are commutative in the space that defined them?

Again, this merely points to the naivety of the "realism" that has been invalidated by BI violations. What BI violations don't do is invalidate "realism" itself, or refute that "elements of reality" exist that are imposing this house of mirrors effect on our observation of observables. Assuming we observe "reality" without effect on it is magical thinking from a realist perspective. Assuming we are a product of these variables, while assuming 'real' variables must remain commutative, is as naive as the questions on this forum asking why doubling speed more than doubles the kinetic energy. But if you're willing to just "shut up and calculate" it's never a problem.

JesseM said:
They did make the claim that there should in certain circumstances be multiple elements of reality corresponding to different possible measurements even when it is not operationally possible to measure them all simultaneously, didn't they?
Yes, but that is only a minimal extension of the point I'm trying to make, not a refutation of it. This corresponds to certain classical contextuality schemes attempting to model BI violations. The strongest evidence against certain types of contextuality schemes, from my perspective, involves metamaterials and other such effects, not BI violations. I think Einstein's assumptions about what constraints realism imposes are overly simplistic, but that doesn't justify the claim that "elements of reality" don't exist.

JesseM said:
I don't follow, what "definitions counter to that EPR provided" are being rejected out of hand?
Are you trying to say here that no "realism" is possible that doesn't accept "realism" as operationally defined by EPR? The very claim that BI violations refute "realism" tacitly makes this claim. If you predicate "realism" on the strongest possible realism, then the notion that a fundamental part has properties is tantamount to claiming it contains a magic spell. It would also entail that measuring without effect is telepathy, and at a fundamental level such an effect must be at least as big as what you want to measure. The Uncertainty Principle, as originally derived, was due to these very thought experiments involving realistic limits, not QM.

So as long as you insist that a local theory cannot be "realistic", even by stronger definitions of realism than EPR provided, then you are rejecting realism "definitions counter to that EPR provided". Have I not provided examples and justification for "realism" definitions that are counter to the EPR definition? Those examples are not claims of reality, they are examples illustrating the naivety of the constraints imposed on the notion of realism and justified on the EPR argument.

JesseM said:
What's the statement of mine you're saying "unless" to? I said "there's no need to assume ... you are simply measuring a pre-existing property which each particle has before measurement", not that this was an assumption I made. Did you misunderstand the structure of that sentence, or are you actually saying that if "observable are a linear projection from a space which has a non-linear mapping to our measured space of variables", then that would mean my statement is wrong and that there is a need to assume we are measuring pre-existing properties the particle has before measurement?
I said "unless" to "there's no need to assume, [...], you are simply measuring a pre-existing property". This was only an example, in which a "pre-existing property" does not exist, yet both properties and "elements of reality do. I give more detail on mappng issue with the Born rule above. These examples are ranges of possibilities that exist within certain theoretical class instances as well as in a range of theoretical classes. Yet somehow BI violations is supposed to trump every class and class instance and disprove realism if locality is maintained. I don't think so.

You got the paraphrase sort of right until you presumed I indicated there "is a need to assume we are measuring pre-existing properties the particle has before measurement". No, I'm saying the lack of pre-existing properties says nothing about the lack of pre-existing "elements of reality". Nor do properties dynamically generated by "elements of reality" a priori entail any sort of linearity between "elements of reality" and properties, at any level.

JesseM said:
Why would infinite or non-compressible physical facts be exceptions to that? Note that when I said "can be defined" I just meant that a coordinate-independent description would be theoretically possible, not that this description would involve a finite set of characters that could be written down in practice by a human. For example, there might be some local variable that could take any real number between 0 and 1 as a value, all I meant was that the value (known by God, say) wouldn't depend on a choice of coordinate system.
Why would infinite indicate non-compressible? If you define an infinite set of infinitesimals in an arbitrary region, why would that entail that even a finite subset of that space is occupied? Even if a finite subset of that space was occupied, it still doesn't entail that it's a solid. Note my previous reference to Hilbert's paradox of the Grand Hotel. Absolute density wouldn't even have a meaning. Yes, a coordinate-independent description would be theoretically possible, yet commutativity can be dependent on a coordinate transform. You can make a gravitational field go away by the appropriate transform, but you can't make its effects on a given observer's perspective go away. The diffeomorphism remains under any coordinate choice, and what appears linear in one coordinate choice may not be under another coordinate choice.

JesseM said:
As you rotate the direction of the beams, are you also rotating the positions of the detectors so that they always lie in the path of the beams and have the same relative angle between their orientation and the beam? If so this doesn't really seem physically equivalent to rotating the detectors, since their the relative angle between the detector orientation and the beam would change.
Actually the detectors remain as the beams are rotated, such that the relative orientation of the emitter and photon polarizations changes wrt the detectors, without affecting the coincidence rate. The very purpose of rotating the beam is to change the emitter and photon orientation wrt the detectors. Using the same predefined photons, it even changes which individual photons take which path through the polarizers, yet the coincidence rates remain. I can also define a bit field for any non-zero setting. I'm attempting to rotate the polarization of the photons to be located at different positions within the bit field, to mimic this effect on the fly. So the individual photons contain this information, rather than some arbitrarily chosen coordinate system. It will also require a statistical splitting of properties if it works, about which I have grave doubts.

JesseM said:
But that's just realism, it doesn't cover locality (Bohmian mechanics would match that notion of realism for example). I think adding locality forces you to conclude that each basic element of reality is associated with a single point in spacetime, and is causally affected only by things in its own past light cone.
Would a local theory with "elements of reality" which dynamically generate but do not possess pre-existing properties qualify as a "realistic" theory? I think your perception of what I think about points in spacetime is distorted by the infinite density assumption, much like Einstein's thinking. Such scale gauges, to recover the hierarchical structure of the standard model, tend to be open parameters in deciding a theoretical construct to investigate. At a fundamental level, lacking any hierarchy, gauges lose meaning due to coordinate independence. The infinite density assumption presumes a pre-existing meaning to scale. It might be better to think in terms of non-standard calculus to avoid vague or absolutist (as in absolutely solid) notions of infinitesimals. Any reasonable conception of infinitesimals in set theory indicates the "solid" presumption is the most extreme case of an extreme range of possibilities. Whole transfinite hierarchies of limits exist in the interim.
 
  • #1,012
billschnieder said:
I think it is a mistake to think that "unfair sampling" is only referring to detection rate.
Weird, that was the entire point of several posts. Yet here I am making the mistake of claiming what I spent all these posts refuting? Just weird.
 
  • #1,013
my_wan said:
Weird, that was the entire point of several posts. Yet here I am making the mistake of claiming what I spent all these posts refuting? Just weird.
Oh not your mistake. I was agreeing with you from a different perspective, there is a missing also somewhere there!
 
  • #1,014
DevilsAvocado said:
Ahh! Now I get it! Thanks for explaining. My guess on this specific case, is that it’s very easy to change the detection window (normally 4-6 ns?) to look for dramatic changes... and I guess that in all of the thousands EPR-Bell experiments, this must have been done at least once...? Maybe DrC knows?
Yes, it may be possible to refute this by recording time stamps and analyzing any continuity in time offsets of detections that missed the coincidence time window.

The main point remains, irrespective of experimental validity of this one example. You can't generally apply a proof invalidating a particular class instance to invalidate the whole class.

DevilsAvocado said:
Okay, you are talking about this curve, right?
Yes. You can effectively consider the curve above the x-axis as exceeding the classical 'max' limit, while the curve below the x-axis as falling short of the classical 'max' limit by the exact same amount it was exceeded in the top part.

Again: this doesn't demonstrate any consistency with any classical model of BI violations. It only indicates that in the "full universe" of "all" possible settings there is no excess of detections relative to the classical limit. Thus certain forms of "fair sampling" arguments are not a priori invalidated by the known invalid "fair sampling" argument involving detection efficiencies. Neither does it mean that such "fair sampling" arguments can't be ruled out by other means, as indicated above.

It's difficult to maintain my main point, which involves the general applicability of a proof to an entire class or range of classes, when such a proof is known to be valid in a given class instance. My example of cases where a given constraint is abrogated is too easily interpreted as a claim or solution in itself. Or worse, reinterpreted as a class instance of the very class instance it was specifically formulated not to represent.
 
  • #1,015
DevilsAvocado said:
You could see it this way. You could also see it as: the very tricky nature would then have to wobble between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...
Physically it's exactly equivalent to a tennis ball being bounced off a wall taking a longer route back as the angle it hits the wall increases. It only requires the assumption that the more offset a polarizer is the longer it takes the photon to tunnel through it. Doesn't really convince me either without some testing, but certainly not something I would call nature being tricky. At least not any more tricky than even classical physics is known to be at times. Any sufficiently large set of dependent variables are going to be tricky, no matter how simple the underlying mechanisms. Especially if it looks deceptively simple on the surface.
 
  • #1,016
billschnieder said:
Oh not your mistake. I was agreeing with you from a different perspective, there is a missing also somewhere there!
Oh, my bad. I can read it with that interpretation now also. The "also" would have made it clear on the first read.
 
  • #1,017
JenniT said:
Dear Dmitry67, many thanks for quick reply. I put 2 small edits in CAPS above.

Hope that's correct?

But I do not understand your "imagine" ++ example.

Elaboration in due course would be nice.

Thank you,

JenniT

Some strings define real numbers. For example,

4.5555
pi
min root of the following equation: ... some latex code...

As any string is a word in a finite alphabet, the set of all possible strings is countable, like the integers. However, the set of real numbers has the power of the continuum.

Call the set of all strings, which define real numbers E
Call the set of all real numbers, defined by E as X
Now exclude X from R (set of all real numbers). The result (set U) is not empty (because R is continuum and X is countable). It is even infinite.

So you have a set with an infinite number of elements, for example... for example... well, if you can provide an example by writing a number itself (it is also a string) or defining it in any possible way, then you can find that string in E and the corresponding number in X. Hence there is no such number in U.

So you have a very weird set U. No element of it can be given as an example. U also illustrates that the Axiom of Choice can be counter-intuitive (while intuitively all people accept it). Imagine that some property P is true only for elements in U, and always false for elements in X. In such a case you get these ghosts like the Banach-Tarski paradox...
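For what it's worth, the counting step behind this can be written compactly (a sketch of the standard cardinality argument, in the E, X, U notation above):

$$|X|\le|E|\le\aleph_0<2^{\aleph_0}=|\mathbb{R}|\quad\Longrightarrow\quad|U|=|\mathbb{R}\setminus X|=2^{\aleph_0},$$

so U is not merely infinite but uncountable.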
 
  • #1,018
Dmitry67 said:
Some strings define real numbers. For example,

4.5555
pi
min root of the following equation: ... some latex code...

As any string is a word in a finite alphabet, the set of all possible strings is countable, like the integers. However, the set of real numbers has the power of the continuum.

Is this quite right?

The fact that you include pi suggests that your understanding of `string' allows a string to be countably long. If so, then it is not true that the set of all possible strings is countable, even if the alphabet is finite: the set of all sequences of the two letter alphabet {1, 0} has the power of the continuum.

On the other hand, if we restrict our attention to finite strings, then the set of all finite strings in a finite alphabet *is* indeed countable. Indeed, the set of all finite strings in a *countable* alphabet is countable.
 
  • #1,019
you can define PI using a finite string: you just don't need to write all the digits, you can simply write the string

sqrt(12)*sum(k from 0 to inf: (-3)**(-k)/(2k+1))
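As a quick check of that series (a minimal sketch):

Code:
from math import sqrt, pi

# pi = sqrt(12) * sum_{k>=0} (-3)^(-k) / (2k+1)
approx = sqrt(12) * sum((-3) ** (-k) / (2 * k + 1) for k in range(60))
print(approx, abs(approx - pi))   # agrees with math.pi to machine precision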
 
  • #1,020
I see what you're saying now and what construction you're giving - 'string' refers to the two letter symbol `pi' rather than the infinite expansion. So the strings are finite.

I don't want to derail this thread - but I thought using the notion of definability without indexing it to a language ('definability-in-L', with this concept being part of the metalanguage) leads to paradoxes.
 
