Is action at a distance possible as envisaged by the EPR Paradox?

  • #1,001
DevilsAvocado said:
Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:
I'm in the process of reviewing De Raedt's work. I'm not convinced of his argument; the physical interpretation is quite a bit more complex. The authors even made the observation:
http://arxiv.org/abs/1006.1728 said:
The EBCM is entirely classical in the sense that it uses concepts of the macroscopic world and makes no reference to quantum theory, but is nonclassical in the sense that it does not rely on the rules of classical Newtonian dynamics.

My point does not depend on any such model, working or not, or even whether or not the claim itself was ultimately valid. I argued against the validity only of the argument itself, not its claims. My point was limited to the over-generalization of taking the obvious invalidity of a "fair sampling" argument involving detection efficiencies and applying it to all "fair sampling" arguments, even those that assume nothing less than perfect detection efficiency.

In this way, my argument is not dependent on De Raedt's work at all, and it only came into play as an example involving DrC's rebuttal, which inappropriately generalized "fair sampling" as invalid on the basis that one class instance of "fair sampling", the one that assumes insufficient detection efficiencies, is invalid.

DevilsAvocado said:
I don’t think this has much to do with real experiments – this is a case of trial & error and "fine-tuning".

One thing that I find 'peculiar' is that the angles of the detectors are not independently random; angle1 is random but angle2 is always at a fixed value offset...? :confused:

To me this does not look like the "real thing"...
Yes, it appears to suffer in the same way my own attempts did, but I haven't actually gotten that far yet in the review. If the correspondence holds, then he accomplished algebraically what I did with a quasi-random distribution of a bit field. However, when you say angle2 is always at a fixed value offset, what is it always offset relative to? You can spin the photon source emitter without effect, so it's not a fixed value offset relative to the source emitter. It's not a fixed value offset relative to the other detector. In fact, the fixed offset is relative to an arbitrary non-physical coordinate choice, which itself can be arbitrarily chosen.

I still need a better argument to fully justify this non-commutativity between arbitrary coordinate choices, but the non-commutativity of classical vector products may play a role.

Again, my latest argument is not predicated on De Raedt's work or any claim that BI violations can or can't be classically modeled. My argument was limited to, and only to, the application of the invalidity of an "unfair sampling" argument involving limited detection efficiencies to "unfair sampling" arguments not involving any such limits on detection efficiency. It's a limit on what can be claimed as a proof, and involves no statements about how nature is.
 
  • #1,002
JenniT said:
Dear my_wan;

This is very interesting to me. I would love to see some expansion on the points you are advancing, especially about this:

Class instances by definition contain constraints not shared by the entire class, and the set of all instances of a class remains undefined within science.

Many thanks,

JenniT
I gave an example in post #993, where I described two different "fair sampling" arguments: one involving variations in detection statistics, the other involving variations in detection timing. The point was not that either is a valid explanation of BI violations; the point was that proving the first instance invalid in the EPR context does not rule out the second instance. Yet they are both members of the same class called "fair sampling" arguments. This was only an example, not a claim of a resolution to BI violations.

Dmitry67 said:
You can find examples in set theory.
This is a tricky subject closely connected to the Axiom of Choice (if I understood the idea correctly).
Yes! I personally think it likely you have made a fundamental connection that goes a bit deeper than anything I could do more than hint at in the context of the present debate. :biggrin:
 
  • #1,003
my_wan said:
DevilsAvocado said:
Have you seen the code?

In the case of the De Raedt Simulation there is no "time window", only a pseudo-random number in r0:
I'm in the process of reviewing De Raedt's work. I'm not convinced of his argument; the physical interpretation is quite a bit more complex.

In this way, my argument is not dependent on De Raedt's work at all, and it only came into play as an example involving DrC's rebuttal, which inappropriately generalized "fair sampling" as invalid on the basis that one class instance of "fair sampling", the one that assumes insufficient detection efficiencies, is invalid.

The De Raedt simulation is an attempt to demonstrate that there exists an algorithm whereby (Un)Fair Sampling leads to a violation of a BI - as observed - while the full universe does not (as required by Bell). They only claim that their hypothesis is "plausible" and do not really claim it as a physical model. A physical model based on their hypothesis would be falsifiable. Loosely, their idea is that a photon might be delayed going through the apparatus and the delay might depend on physical factors. Whether you find this farfetched or not is not critical to the success of their simulation. The idea is that it is "possible".
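
(For anyone unfamiliar with how the time window enters, here is a minimal Python sketch of my own of the pairing step - not De Raedt's code, and not any experiment's actual algorithm - just to show how a delayed detection falls out of the coincidence count even though nothing is "missing":)

def count_coincidences(times_a, times_b, window_ns=4.0):
    """Toy pairing routine (illustration only, not any experiment's algorithm).
    times_a and times_b are sorted detection time stamps (ns) from the two
    stations; two detections count as a coincidence only if they fall within
    window_ns of each other, and each detection is paired at most once."""
    count = 0
    j = 0
    for t_a in times_a:
        # skip B detections that are too early to ever pair with t_a
        while j < len(times_b) and times_b[j] < t_a - window_ns:
            j += 1
        if j < len(times_b) and abs(times_b[j] - t_a) <= window_ns:
            count += 1
            j += 1
    return count

# A delay on one side pushes a genuine pair outside the window:
print(count_coincidences([10.0, 20.0], [10.5, 27.0]))  # prints 1, not 2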

My point has been simply that it is not at all certain that a simulation like that of De Raedt can be successfully constructed. So that is what I am looking at. I believe that there are severe constraints and I would like to see these spelled out and documented. Clearly, the constraint of the Entangled State / Product State mentioned in the other thread is a tough one. But as of this minute, I would say they have passed the test.

At any rate, they acknowledge that Bell applies. They do not assert that the full universe violates a BI.
 
  • #1,004
DevilsAvocado said:
It means exactly the same as "fair sampling assumption": That the sample of detected pairs is representative of the pairs emitted.
Yes, the "fair sampling assumption" does assume the sample of detected pairs is representative of the pairs emitted, and assuming otherwise is incongruent with the experimental constraints, thus invalid. An alternative "fair sampling assumption" assumes that the time taken to register a detection is the same regardless of the detector offsets. The invalidity of the first "fair sampling assumption" does not invalidate the second "fair sampling assumption". It's doesn't prove it's valid either, but neither is the claim that the invalidity of the first example invalidates the second.

DevilsAvocado said:
I.e. we are not assuming that nature is really a tricky bastard, by constantly not showing us the "enhancements" that would spoil all EPR-Bell experiments, all the time. :biggrin:
Again, tricky how? We know it's tricky in some sense. Consider the event timing versus event detection rates in the above example. If you bounce a tennis ball off the wall, its return time depends on the angle it hits the wall in front of you. Its path length also depends on that angle. Is nature being "tricky" doing this? Is nature being "tricky" if it takes longer to detect a photon passing a polarizer at an angle than it takes if the polarizer's polarization is the same as, or nearly the same as, the photon's? I wouldn't call that "tricky", any more than a 2 piece pyramid puzzle is. In all the years I've had it, only one person I've met who hadn't seen it before was able to solve it without help.
http://www.puzzle-factory.com/pyramid-2pc.html
We already know the speed of light is different in media with different indices of refraction.

DevilsAvocado said:
What does this really mean??
This was in reference to "full universe". DrC and I did use it in a slightly different sense. DrC used it to mean any possible 'set of' detector settings. I used it to mean 'all possible' detector settings. I'll explain the consequences in more detail below.

DevilsAvocado said:
my_wan, no offence – but is this the "full universe" of detector settings? :bugeye:

I don’t get this. What on Earth has cos^2(22.5) + cos^2(67.5) = 1 to do with the "fair sampling assumption"...?

Do you mean that we are constantly missing photons that would, if they were measured, always set correlation probability to 1?? I don’t get it...
No, there are NO photon detections missing! Refer back to post #993. The only difference is in how fast the detection occurs, yet even this is an example, not a claim. If 2 photons hit 2 different detectors at the same time, but one of them takes longer to register the detection, then they will not appear correlated because they appear to have occurred at 2 separate times. Not one of the detections is missing, only delayed.

Ok, here's the "full universe" argument again, in more detail.
The classical limit, as defined, sets a maximum correlation rate for any given setting offset. QM predicts, and experiments support, that for offsets between 0 and 45 degrees the maximum classical limit is exceeded. QM also predicts that, for angles between 45 and 90 degrees, the QM correlations fall short of the classical limit. This is repeated in every 90 degree segment. If you add up all the extra correlations between 0 and 45 degrees that exceed the classical limit, and add them to the missing correlations between 45 and 90 degrees that the classical limit allows, you end up with ZERO extra correlations. Repeat for the other three 90 degree segments: 4 x 0 = 0. QM does not predict any extra correlations when you average over all possible settings. It only allows you to choose certain limited non-random settings where the classical limit is exceeded, which presents problems for classical models.
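
(If anyone wants to check that arithmetic, here's a quick Python sketch; I'm taking "the classical limit" to mean the straight-line prediction 1 - θ/90 and the QM prediction to be cos²θ, so both average to exactly 1/2 over a full 0-90 degree segment:)

import numpy as np

theta = np.linspace(0.0, 90.0, 180001)     # detector offset in degrees
p_qm = np.cos(np.radians(theta)) ** 2      # QM coincidence prediction
p_cl = 1.0 - theta / 90.0                  # straight-line classical limit (assumed form for this check)

print(round(float(np.mean(p_qm)), 6))      # ~0.5
print(round(float(np.mean(p_cl)), 6))      # 0.5 -> no net excess averaged over the full segment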
 
  • #1,005
DrC,
Please note that my argument has nothing to do with the De Raedt simulation. It was merely an example of overextending the lack of validity of a fair sampling argument involving limited detection efficiencies to fair sampling arguments that could remain valid even if detection efficiencies were always absolutely perfect.
 
  • #1,006
DrC, two questions,
1) Do you agree that "fair sampling" assumptions exist, irrespective of validity, that do not involve the assumption that photon detection efficiencies are less than perfect?
2) Do you agree that, averaged over all possible settings, not just some chosen subset of settings, the QM and classical correlation limits lead to the same overall total number of detections?
 
  • #1,007
my_wan said:
However, when you say angle2 is always at a fixed value offset, what is it always offset relative to?

angle2 = angle1 + Radians(Theta) ' fixed value offset always


And Theta is a (user) argument into the main function.
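
(Just to make the comparison concrete, here is a hypothetical Python sketch - not the simulation's code - of the two ways the settings could be chosen; since the predicted correlation depends only on the relative angle, the fixed-offset scheme is really a special case of drawing both angles independently and binning by their difference:)

import random

def fixed_offset_settings(n, theta_deg):
    """The quoted scheme: angle1 random, angle2 always angle1 + theta."""
    pairs = []
    for _ in range(n):
        a1 = random.uniform(0.0, 360.0)
        pairs.append((a1, (a1 + theta_deg) % 360.0))
    return pairs

def independent_settings(n):
    """Alternative: both angles drawn independently at random; results would
    later be binned by the relative angle (a1 - a2) % 180."""
    return [(random.uniform(0.0, 360.0), random.uniform(0.0, 360.0))
            for _ in range(n)]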

my_wan said:
Again, tricky how?
DevilsAvocado said:
To me, this is as natural as when you grab a handful of white sand on a white beach: you don’t assume that every grain of sand that you didn’t get into your hand... is actually black! :wink:


my_wan said:
No, there are NO photon detections missing! Refer back to post #993. The only difference is in how fast the detection occurs, yet even this is an example, not a claim. If 2 photons hit 2 different detectors at the same time, but one of them takes longer to register the detection, then they will not appear correlated because they appear to have occurred at 2 separate times. Not one of the detections is missing, only delayed.

Ahh! Now I get it! Thanks for explaining. My guess on this specific case is that it’s very easy to change the detection window (normally 4-6 ns?) to look for dramatic changes... and I guess that in all of the thousands of EPR-Bell experiments, this must have been done at least once...? Maybe DrC knows?

my_wan said:
Ok, here's the "full universe" argument again, in more detail.
The classical limit, as defined, sets a maximum correlation rate for any given setting offset. QM predicts, and experiments support, that for offsets between 0 and 45 degrees the maximum classical limit is exceeded. QM also predicts that, for angles between 45 and 90 degrees, the QM correlations fall short of the classical limit. This is repeated in every 90 degree segment.

Okay, you are talking about this curve, right?

[Attached image: the correlation curve under discussion]
 
  • #1,008
my_wan said:
If you add up all the extra correlations between 0 and 45 degrees that exceed the classical limit, and add them to the missing correlations between 45 and 90 degrees that the classical limit allows, you end up with ZERO extra correlations.

You could see it that way. You could also see it as the very tricky nature then having to wobble between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...
 
  • #1,009
DA, nice recent (long) post, #985. Sorry for the delay in replying. I've been busy with holiday activities. Anyway, I see that there have been some replies to (and amendments or revisions by you of) your post. I've lost count of how many times I've changed my mind on how to approach understanding both Bell and entanglement correlations. One consideration involves the proper interpretation of Bell's work and results wrt LHV or LR models of entanglement. Another consideration involves the grounds for assuming nonlocality in nature. And yet another consideration involves approaches to understanding how light might be behaving in optical Bell tests to produce the observed correlations, without assuming nonlocality. The latter involves quantum optics. Unfortunately, qo doesn't elucidate instrument-independent photon behavior (ie., what's going on between emission and filtration/detection). So, there's some room for speculation there (not that there's any way of definitively knowing whether a proposed, and viable, 'realistic' model of 'interim' photon behavior corresponds to reality). In connection with this, JenniT is developing an LR model in the thread on Bell's mathematics, and Qubix has provided a link to a proposed LR model by Joy Christian.

Anyway, it isn't like these are easy questions/considerations.

Here's a paper that I'm reading which you might be interested in:

http://arxiv.org/PS_cache/arxiv/pdf/0706/0706.2097v2.pdf

And here's an article in the Stanford Encyclopedia of Philosophy on the EPR argument:

http://plato.stanford.edu/entries/qt-epr/#1.2

Pay special attention to Einstein on locality/separability, because it has implications regarding why Bell's LHV ansatz might be simply an incorrect model of the experimental situation rather than implying nonlocality in nature.

Wrt to your exercises illustrating the difficulty of understanding the optical Bell test correlations in terms of specific polarization vectors -- yes, that is a problem. It's something that probably most, or maybe all, of the readers of this thread have worked through. It suggests a few possibilities: (1) the usual notion/'understanding' of polarization is incorrect or not a comprehensive physical description, (2) the usual notion/'understanding' of spin is incorrect or not a comprehensive physical description, (3) the concepts are being misapplied or inadequately/incorrectly modeled, (4) the experimental situation is being incorrectly modeled, (5) the dynamics of the reality underlying instrumental behavior is significantly different from our sensory reality/experience, (6) there is no reality underlying instrumental behavior or underlying our sensory reality/experience, etc., etc. My current personal favorites are (3) and (4), but, of course, that could change. Wrt fundamental physics, while there's room for speculation, one still has to base any speculations on well established physical laws and dynamical principles which are, necessarily, based on real physical evidence (ie. instrumental behavior, and our sensory experience, our sensory apprehension of 'reality' -- involving, and evolving according to, the scientific method of understanding).

And now, since I have nothing else to do for a while, I'll reply to a few of your statements. Keep a sense of humor, because I feel like being sarcastic.

DevilsAvocado said:
ThomasT, I see you and billschnieder spend hundreds of posts in trying to disprove Bell's (2) with various farfetched arguments, believing that if Bell's (2) can be proven wrong – then Bell's Theorem and all other work done by Bell will go down the drain, including nonlocality.
My current opinion is that Bell's proof of the nonviability of his LHV model of entanglement doesn't warrant the assumption of nonlocality. Why? Because, imo, Bell's (2) doesn't correctly model the experimental situation. This is what billschnieder and others have shown, afaict. There are several conceptually different ways to approach this, and so there are several conceptually different ways of showing this, and several conceptually different proposed, and viable, LR, or at least Local Deterministic, models of entanglement.

If any of these approaches is eventually accepted as more or less correct, then, yes, that will obviate the assumption of nonlocality, but, no, that will not flush all of Bell's work down the drain. Bell's work was pioneering, even if his LHV ansatz is eventually accepted as not general and therefore not implying nonlocality.

DevilsAvocado said:
The aim of the EPR paradox was to show that there was a preexisting reality at the microscopic QM level - that the QM particles indeed had a real value before any measurements were performed (thus disproving Heisenberg uncertainty principle HUP).

To make the EPR paper extremely short; If we know the momentum of a particle, then by measuring the position on a twin particle, we would know both momentum & position for a single QM particle - which according to HUP is impossible information, and thus Einstein had proven QM to be incomplete ("God does not play dice").
The papers I referenced above have something to say about this.

DevilsAvocado said:
Do you understand why we get upset when you and billschnieder argue the way you do?
Yes. Because you're a drama queen. But we're simply presenting and analyzing and evaluating ideas. There should be no drama related to that. Just like there's no crying in baseball. Ok?

DevilsAvocado said:
You are urging PF users to read cranky papers - while you & billschnieder obviously haven't read, or understood, the original Bell paper that this is all about??
I don't recall urging anyone to read cranky papers. If you're talking about Kracklauer, I haven't read all his papers yet, so I don't have any opinion as to their purported (by you) crankiness. But, what I have read so far isn't cranky. I think I did urge 'you' to read his papers, which would seem to be necessary since you're the progenitor, afaik, of the idea that Kracklauer is a crank and a crazy person.

The position you've taken, and assertions you've made, regarding Kracklauer, put you in a precarious position. The bottom line is that the guy has some ideas that he's promoting. That's all. They're out there for anyone to read and criticize. Maybe he's wrong on some things. Maybe he's wrong on everything. So what? Afaict, so far, he's far more qualified than you to have ideas about and comment on this stuff. Maybe he's promoting his ideas too zealously for your taste or sensibility. Again, who cares? If you disagree with an argument or an idea, then refute it if you can.

As for billschnieder and myself reading Bell's papers, well of course we've read them. In fact, you'll find somewhere back in this thread where I had not understood a part of the Illustrations section, and said as much, and changed my assessment of what Bell was saying wrt it.

And of course it's possible, though not likely, that neither billschnieder nor I understand what Bell's original paper was all about. But I think it's much more likely that it's you who's missing some subtleties wrt its interpretation. No offense of course.

Anyway, I appreciate your most recent lengthy post, and revisions, and most of your other posts, as genuine attempts by you to understand the issues at hand. I don't think that anybody fully understands them yet. So physicists and philosophers continue to discuss them. And insights into subtle problems with Bell's formulation, and interpretations thereof, continue to be presented, along with LR models of entanglement that have yet to be refuted.

Please read the stuff I linked to. It's written by bona fide respected physicists.

And, by the way, nice recent posts, but the possible experimental 'loopholes' (whether fair sampling/detection, or coincidence, or communication, or whatever) have nothing to do with evaluating the meaning of Bell's theorem. The correlation between the angular difference of the polarizers and coincidental detection must be, according to empirically established (and local) optical laws, a sinusoidal function, not a linear one.
 
  • #1,010
my_wan said:
To make the difference between an experimentally invalid "unfair sampling" argument, involving detection efficiencies, and more general "fair sampling" arguments more clear, consider:

You have a single pair of photons. They are both detected within a time window, thus a coincidence occurs. Now suppose you chose different settings and detected both photons, but they didn't fall within the coincidence window. In both cases you had a 100% detection rate, so "fair sampling", defined in terms of detection efficiencies, is absolutely invalid. Yet, assuming the case defined holds, this was a "fair sampling" argument that did not involve detection efficiencies, and cannot be ruled out by perfectly valid arguments against "fair sampling" involving detection efficiencies.

I think it is a mistake to think that "unfair sampling" is only referring to detection rate. The CHSH inequality is the following:

|E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2

It is true that in deriving this, Bell assumed every photon/particle was detected given that his A(.) and B(.) functions are defined as two-valued functions (+1, -1) rather than three-valued functions with a non-detection outcome included. An important point to note here is (1) there is a P(λ), implicit in each of the expectation value terms in that inequality, and Bell's derivation relies on the fact that P(λ) is exactly the same probability distribution for each and every term in that inequality.

Experimentally, not all photons are detected, so the "fair sampling assumption" together with "coincidence circuitry" is used to overcome that problem. Therefore the "fair sampling assumption" is invoked in addition to the coincidence counting to state that the detected coincident photons are representative of the full universe of photon pairs leaving the source.

The next important point to remember is this; (2) in real experiments each term in the inequality is a conditional expectation value, conditioned on "coincidence". The effective inequality being calculated in a real experiment is therefore:

|E(a,b|coinc) + E(a,b'|coinc) + E(a',b|coinc) - E(a',b'|coinc)| <= 2

So then, looking at both crucial points above and remembering the way experiments are actually performed, we come to understand that the "fair sampling assumption" entails the following:

1) P(coinc) MUST be independent of λ
2) P(coinc) MUST be independent of a and/or b (i.e., joint channel efficiencies must be factorizable)
3) P(λ) MUST be independent of a and/or b
4) If for any specific setting pair(a,b), the probability of "non-consideration" of a photon pair (ie, no coincidence) is dependent on the hidden parameter λ, then (1), (2) and (3) will fail, and together with them, the "fair sampling assumption" will fail.

The question then becomes, is it unreasonable to expect that for certain hidden λ, P(coinc) will not be the same in all 4 terms and therefore P(λ) can not be expected to always be the same for all 4 terms?
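
(To make points (1)-(3) concrete, here is a minimal Monte Carlo scaffold in Python. The outcome and coincidence functions below are placeholders of my own, not a model anyone here has proposed; the scaffold just shows where a λ- and setting-dependent coincidence probability enters the conditional expectation values. Whether any particular choice actually pushes |S| above 2 is exactly the question at issue.)

import math
import random

def outcome(setting, lam):
    """Toy deterministic local outcome: the sign of cos 2(setting - lam)."""
    return 1 if math.cos(2.0 * (setting - lam)) >= 0.0 else -1

def coincidence_prob(a, b, lam):
    """Placeholder coincidence probability that depends on lambda and on the
    local settings -- exactly the dependence that conditions (1)-(3) forbid."""
    return abs(math.cos(2.0 * (a - lam)) * math.cos(2.0 * (b - lam)))

def conditional_E(a, b, trials=200_000):
    """Monte Carlo estimate of E(a,b | coinc) with lambda uniform on [0, pi)."""
    num = den = 0
    for _ in range(trials):
        lam = random.uniform(0.0, math.pi)
        if random.random() < coincidence_prob(a, b, lam):
            num += outcome(a, lam) * outcome(b, lam)
            den += 1
    return num / den if den else 0.0

a, ap = 0.0, math.radians(45.0)
b, bp = math.radians(22.5), math.radians(67.5)
S = abs(conditional_E(a, b) + conditional_E(a, bp)
        + conditional_E(ap, b) - conditional_E(ap, bp))
print(round(S, 3))   # compare with the bound of 2 that holds without conditioning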

In fact (2) has been put to the test using real data from the Weihs et al experiment and failed. See the article here (http://arxiv4.library.cornell.edu/abs/quant-ph/0606122 , J. Phys. B 40 No 1 (2007) 131-141)
Abstract:
We analyze optical EPR experimental data performed by Weihs et al. in Innsbruck 1997-1998. We show that for some linear combinations of the raw coincidence rates, the experimental results display some anomalous behavior that a more general source state (like non-maximally entangled state) cannot straightforwardly account for. We attempt to explain these anomalies by taking account of the relative efficiencies of the four channels. For this purpose, we use the fair sampling assumption, and assume explicitly that the detection efficiencies for the pairs of entangled photons can be written as a product of the two corresponding detection efficiencies for the single photons. We show that this explicit use of fair sampling cannot be maintained to be a reasonable assumption as it leads to an apparent violation of the no-signalling principle.
 
  • #1,011
Note that I am describing classes of realistic constructs, to demonstrate the absurdity of generalizing the refutation of a single class instance of a realism class to represent a refutation of realism in general. It goes to the legitimacy of this generalization of realism, as defined by EPR, not to any given class or class instance described.

The most surprising result of such attempts at providing example realism models that are explicitly at odds with realism as defined by EPR is that I'm often paraphrased as requiring exactly what these example model classes are explicitly formulated to reject. Namely: 1) That observables are representative indicators of elements of reality. 2) That real observables are linear representative indicators of such elements. 3) That properties are pre-existing (innate) to such elements. These are all presumptuous assumptions, and the models I describe are diametrically opposed to realism as so defined by EPR; thus such constructive elements of reality are not addressed by BI, with or without locality.

JesseM said:
But Bell's proof is abstract and mathematical, it doesn't depend on whether it is possible to simulate a given hidden variables theory computationally, so why does it matter what the "computational demands of modeling BI violations" are? I also don't understand your point about a transfinite set of hidden variables and Hilbert's Hotel paradox...do you think there is some specific step in the proof that depends on whether lambda stands for a finite or transfinite number of facts, or that would be called into question if we assumed it was transfinite?
I understand the mathematical abstraction BI is based on. It is because the mathematics is abstract that the consequent assumptions of the claims go beyond the validity of BI itself. Asher Peres notes that "elements of reality" are identified by the EPR definition. He also notes the extra assumption that the sum or product of two commuting elements of reality is also an element of reality. In:
http://www.springerlink.com/content/g864674334074211/
he outlines the algebraic contradiction that ensues from these assumptions. On what basis are these notions of realism predicated? If "elements of reality" exist, how justified are we in presuming that properties are innate to these elements?

Our own DrC has written some insightful comments concerning realism, refuting Hume, in which it was noted how independent variables must be unobservable. If all fundamental variables are in some sense independent, how do we get observables? My guess is that observables are a propagation of events, not things. Even the attempt to detect an "element of reality" entails the creation of events, where what's detected is not the "element of reality" itself but the propagation of observables (event sets) created by those events, not the properties of the "element of reality".

Consider a classical analog involving laminar versus turbulent flow, and suppose you could only define density in terms of the event rates (collisions in classical terms) in the medium. The classical notion of particle density disappears. This is, at a fundamental level, roughly the basis of many different models, involving GR, QM, and some proposals for QG. Erik Verlinde is taking some jousting from his colleagues for a preprint along roughly similar lines.

The point here is that treating properties as something owned by things is absurdly naive, and it is even more naive to assume real properties are commutative representations of things (think back to the event rate example). This is also fundamentally what is meant by "statistically complete variables" in the published literature.

Now you can object to it not being "realistic" on the basis of not identifying individual "elements of reality", but if the unobservability argument above is valid, on what grounds do you object to a theory that doesn't uniquely identify unobservables (independent elements of reality)? Is that justification for a claim of non-existence?

JesseM said:
I'm not sure what you mean by "projections from a space"...my definition of local realism above was defined in terms of points in our observable spacetime, if an event A outside the past light cone of event B can nevertheless have a causal effect on B then the theory is not local realist theory in our spacetime according to my definition, even if the values of variables at A and B are actually "projections" from a different unseen space where A is in the past light cone of B (is that something like what you meant?)
Consider a standard covariant transform in GR. A particular observer's perspective is a "projection" of this curved space onto the Euclidean space our perceptions are predisposed to. Suppose we generalize this even further, to include the Born rule, |psi|^2, such that a mapping of a set of points involves mapping them onto a powerset of points. Aside from the implications in set theory, this leads to non-commutativity even if the variables are commutative within the space that defines them. Would such a house of mirrors distortion of our observer perspective of what is commutative invalidate "realism", even when those same variables are commutative in the space that defined them?

Again, this merely points to the naivety of the "realism" that has been invalidated by BI violations. What BI violations don't do is invalidate "realism" as such, or refute that "elements of reality" exist which impose this house of mirrors effect on our observation of observables. Assuming we observe "reality" without effect on it is magical thinking from a realist perspective. Assuming we are a product of these variables, while assuming 'real' variables must remain commutative, is as naive as the questions on this forum asking why doubling speed more than doubles the kinetic energy. But if you're willing to just "shut up and calculate" it's never a problem.

JesseM said:
They did make the claim that there should in certain circumstances be multiple elements of reality corresponding to different possible measurements even when it is not operationally possible to measure them all simultaneously, didn't they?
Yes, but that is only a minimal extension to the point I'm trying to make, not a refutation of it. This corresponds to certain classical contextuality schemes attempted as models of BI violations. The strongest evidence against certain types of contextuality schemes, from my perspective, involves metamaterials and other such effects, not BI violations. I think Einstein's assumptions about what constraints realism imposes are overly simplistic, but that doesn't justify the claim that "elements of reality" don't exist.

JesseM said:
I don't follow, what "definitions counter to that EPR provided" are being rejected out of hand?
Are you trying to say here that no "realism" is possible that doesn't accept "realism" as operationally defined by EPR? The very claim that BI violations refute "realism" tacitly makes this claim. If you predicate "realism" on the strongest possible realism, then the notion that a fundamental part has properties is tantamount to claiming it contains a magic spell. It would also entail that measuring without effect is telepathy, and at a fundamental level such an effect must be at least as big as what you want to measure. The Uncertainty Principle, as originally derived, was due to these very thought experiments involving realistic limits, not QM.

So as long as you insist that a local theory cannot be "realistic", even by stronger definitions of realism than EPR provided, then you are rejecting realism "definitions counter to that EPR provided". Have I not provided examples and justification for "realism" definitions that are counter to the EPR definition? Those examples are not claims of reality, they are examples illustrating the naivety of the constraints imposed on the notion of realism and justified on the EPR argument.

JesseM said:
What's the statement of mine you're saying "unless" to? I said "there's no need to assume ... you are simply measuring a pre-existing property which each particle has before measurement", not that this was an assumption I made. Did you misunderstand the structure of that sentence, or are you actually saying that if "observable are a linear projection from a space which has a non-linear mapping to our measured space of variables", then that would mean my statement is wrong and that there is a need to assume we are measuring pre-existing properties the particle has before measurement?
I said "unless" to "there's no need to assume, [...], you are simply measuring a pre-existing property". This was only an example, in which a "pre-existing property" does not exist, yet both properties and "elements of reality do. I give more detail on mappng issue with the Born rule above. These examples are ranges of possibilities that exist within certain theoretical class instances as well as in a range of theoretical classes. Yet somehow BI violations is supposed to trump every class and class instance and disprove realism if locality is maintained. I don't think so.

You got the paraphrase sort of right until you presumed I indicated there "is a need to assume we are measuring pre-existing properties the particle has before measurement". No, I'm saying the lack of pre-existing properties says nothing about the lack of pre-existing "elements of reality". Nor do properties dynamically generated by "elements of reality" a priori entail any sort of linearity between "elements of reality" and properties, at any level.

JesseM said:
Why would infinite or non-compressible physical facts be exceptions to that? Note that when I said "can be defined" I just meant that a coordinate-independent description would be theoretically possible, not that this description would involve a finite set of characters that could be written down in practice by a human. For example, there might be some local variable that could take any real number between 0 and 1 as a value, all I meant was that the value (known by God, say) wouldn't depend on a choice of coordinate system.
Why would infinite indicate non-compressible? If you define an infinite set of infinitesimals in an arbitrary region, why would that entail even a finite subset of that space is occupied? Even if a finite subset of that space was occupied, it still doesn't entail that it's a solid. Note my previous reference to Hilbert's paradox of the Grand Hotel. Absolute density wouldn't even have a meaning. Yes, a coordinate-independent description would be theoretically possible, yet commutativity can be dependent on a coordinate transform. You can make a gravitational field go away by the appropriate transform, but you can't make its effects on a given observer's perspective go away. The diffeomorphism remains under any coordinate choice, and what appears linear in one coordinate choice may not be under another coordinate choice.

JesseM said:
As you rotate the direction of the beams, are you also rotating the positions of the detectors so that they always lie in the path of the beams and have the same relative angle between their orientation and the beam? If so this doesn't really seem physically equivalent to rotating the detectors, since the relative angle between the detector orientation and the beam would change.
Actually the detectors remain fixed as the beams are rotated, such that the relative orientation of the emitter and photon polarizations changes wrt the detectors, without affecting the coincidence rate. The very purpose of rotating the beam is to change the emitter's and photons' orientation wrt the detectors. Using the same predefined photons, it even changes which individual photons take which path through the polarizers, yet the coincidence rates remain. I can also define a bit field for any non-zero setting. I'm attempting to rotate the polarization of the photons to be located at different positions within the bit field, to mimic this effect on the fly. So the individual photons contain this information, rather than some arbitrarily chosen coordinate system. It will also require a statistical splitting of properties if it works, about which I have grave doubts.

JesseM said:
But that's just realism, it doesn't cover locality (Bohmian mechanics would match that notion of realism for example). I think adding locality forces you to conclude that each basic element of reality is associated with a single point in spacetime, and is causally affected only by things in its own past light cone.
Would a local theory with "elements of reality" which dynamically generate but do not possess pre-existing properties qualify as a "realistic" theory? I think your perception of what I think about points in spacetime is distorted by the infinite density assumption, much like Einstein's thinking. Such scale gauges, to recover the hierarchical structure of the standard model, tend to be open parameters in deciding a theoretical construct to investigate. At a fundamental level, lacking any hierarchy, gauges lose meaning due to coordinate independence. The infinite density assumption presumes a pre-existing meaning to scale. It might be better to think in terms of non-standard calculus to avoid vague or absolutist (as in absolutely solid) notions of infinitesimals. Any reasonable conception of infinitesimals in set theory indicates the "solid" presumption is the most extreme case of an extreme range of possibilities. Whole transfinite hierarchies of limits exist in the interim.
 
  • #1,012
billschnieder said:
I think it is a mistake to think that "unfair sampling" is only referring to detection rate.
Weird, that was the entire point of several posts. Yet here I am making the mistake of claiming what I spent all those posts refuting? Just weird.
 
  • #1,013
my_wan said:
Weird, that was the entire point of several posts. Yet here I am making the mistake of claiming what I spent all those posts refuting? Just weird.
Oh, not your mistake. I was agreeing with you from a different perspective; there is a missing "also" somewhere in there!
 
  • #1,014
DevilsAvocado said:
Ahh! Now I get it! Thanks for explaining. My guess on this specific case is that it’s very easy to change the detection window (normally 4-6 ns?) to look for dramatic changes... and I guess that in all of the thousands of EPR-Bell experiments, this must have been done at least once...? Maybe DrC knows?
Yes, it may be possible to refute this by recording time stamps and analyzing any continuity in time offsets of detections that missed the coincidence time window.

The main point remains, irrespective of experimental validity of this one example. You can't generally apply a proof invalidating a particular class instance to invalidate the whole class.

DevilsAvocado said:
Okay, you are talking about this curve, right?
Yes. You can effectively consider the curve above the x-axis as exceeding the classical 'max' limit, while the curve below the x-axis falls short of the classical 'max' limit by the exact same amount it is exceeded in the top part.

Again: this doesn't demonstrate any consistency with any classical model of BI violations. It only indicates that in the "full universe" of "all" possible settings there is no excess of detections relative to the classical limit. Thus certain forms of "fair sampling" arguments are not a priori invalidated by the known invalid "fair sampling" argument involving detection efficiencies. Neither does it mean that such "fair sampling" arguments can't be ruled out by other means, as indicated above.

It's difficult to maintain my main point, which involves the general applicability of a proof to an entire class or range of classes, when such a proof is known to be valid in a given class instance. My example of cases where a given constraint is abrogated is too easily interpreted as a claim or solution in itself. Or worse, reinterpreted as the very class instance it was specifically formulated not to represent.
 
  • #1,015
DevilsAvocado said:
You could see it that way. You could also see it as the very tricky nature then having to wobble between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...
Physically it's exactly equivalent to a tennis ball bounced off a wall taking a longer route back as the angle it hits the wall increases. It only requires the assumption that the more offset a polarizer is, the longer it takes the photon to tunnel through it. It doesn't really convince me either without some testing, but it's certainly not something I would call nature being tricky. At least not any more tricky than even classical physics is known to be at times. Any sufficiently large set of dependent variables is going to be tricky, no matter how simple the underlying mechanisms. Especially if it looks deceptively simple on the surface.
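
(Stated as a toy simulation, purely to illustrate the mechanism and with a completely made-up delay law: a Python sketch in which every photon is detected, yet the coincidence rate still depends on the settings because the registration delay grows with the offset between the photon's polarization and the local polarizer.)

import random

WINDOW_NS = 4.0            # coincidence window
MAX_EXTRA_DELAY_NS = 10.0  # made-up scale for the angle-dependent delay

def registration_time(polarizer_deg, photon_pol_deg):
    """Made-up delay law: the further the photon's polarization is from the
    polarizer axis (mod 90 deg), the longer the detector takes to fire."""
    offset = abs(polarizer_deg - photon_pol_deg) % 90.0
    return MAX_EXTRA_DELAY_NS * (offset / 90.0)

def coincidence(angle_a_deg, angle_b_deg):
    """Both photons share a random polarization and are ALWAYS detected; the
    pair only counts as a coincidence if the two registration times fall
    within the window of each other."""
    pol = random.uniform(0.0, 180.0)
    return abs(registration_time(angle_a_deg, pol)
               - registration_time(angle_b_deg, pol)) <= WINDOW_NS

trials = 100_000
hits = sum(coincidence(0.0, 22.5) for _ in range(trials))
print(hits / trials)   # less than 1.0 even though every single photon was detected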
 
  • #1,016
billschnieder said:
Oh, not your mistake. I was agreeing with you from a different perspective; there is a missing "also" somewhere in there!
Oh, my bad. I can read it with that interpretation now also. The "also" would have made it clear on the first read.
 
  • #1,017
JenniT said:
Dear Dmitry67, many thanks for quick reply. I put 2 small edits in CAPS above.

Hope that's correct?

But I do not understand your "imagine" ++ example.

Elaboration in due course would be nice.

Thank you,

JenniT

Some strings define real numbers. For example,

4.5555
pi
min root of the following equation: ... some latex code...

As any string is a word in a finite alphabet, the set of all possible strings is countable, like the integers. However, the set of real numbers has the power of the continuum.

Call the set of all strings which define real numbers E.
Call the set of all real numbers defined by E, X.
Now exclude X from R (the set of all real numbers). The result (the set U) is not empty (because R is a continuum and X is countable). It is even infinite.

So you have a set with an infinite number of elements; now try to give an example of one... well, if you can provide an example by writing a number itself (it is also a string) or defining it in any possible way, then you can find that string in E and the corresponding number in X. Hence there is no such number in U.

So you have a very weird set U. No element of it can be given as an example. U also illustrates that the Axiom of Choice can be counter-intuitive (even though intuitively everyone accepts it). Imagine that some property P is true only for elements in U, and always false for elements in X. In such a case you get ghosts like the Banach-Tarski paradox...
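
(Restating the cardinality step compactly, with \Sigma^* the set of all finite strings over the alphabet \Sigma and E, X, U as above:)

\[
|X| \le |E| \le |\Sigma^{*}| = \aleph_0 < 2^{\aleph_0} = |\mathbb{R}|
\quad\Longrightarrow\quad
U = \mathbb{R}\setminus X \neq \varnothing, \qquad |U| = 2^{\aleph_0}.
\]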
 
  • #1,018
Dmitry67 said:
Some strings define real numbers. For example,

4.5555
pi
min root of the following equation: ... some latex code...

As any string is a word in a finite alphabet, the set of all possible strings is countable, like the integers. However, the set of real numbers has the power of the continuum.

Is this quite right?

The fact that you include pi suggests that your understanding of 'string' allows a string to be countably infinite in length. If so, then it is not true that the set of all possible strings is countable, even if the alphabet is finite: the set of all sequences over the two letter alphabet {1, 0} has the power of the continuum.

On the other hand, if we restrict our attention to finite strings, then the set of all finite strings in a finite alphabet *is* indeed countable. Indeed, the set of all finite strings in a *countable* alphabet is countable.
 
  • #1,019
You can define pi using a finite string: you just don't need to write all the digits, you can simply write the string

sqrt(12)*sum(k from 0 to inf: ((-3)**(-k))/(2k+1))
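
(Reading the exponent as applying to (-3), a quick numerical check in Python that this finite string really does pin down pi:)

import math

approx = math.sqrt(12) * sum((-3) ** (-k) / (2 * k + 1) for k in range(60))
print(approx, math.pi)   # the partial sum agrees with pi to many decimal places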
 
  • #1,020
I see what you're saying now and what construction you're giving - 'string' refers to the two-letter symbol 'pi' rather than the infinite expansion. So the strings are finite.

I don't want to derail this thread - but I thought using the notion of definability without indexing it to a language ('definability-in-L', with this concept being part of the metalanguage) leads to paradoxes.
 
  • #1,021
my_wan said:
DrC, two questions,
1) Do you agree that "fair sampling" assumptions exist, irrespective of validity, that do not involve the assumption that photon detection efficiencies are less than perfect?
2) Do you agree that, averaged over all possible settings, not just some chosen subset of settings, the QM and classical correlation limits lead to the same overall total number of detections?

The Fair Sampling Assumption is: due to some element(s) of the collection and detection apparatus, either Alice or Bob (or both) did not register a member of an entangled photon pair that "should" have been seen. AND FURTHER, that photon, if detected, was one which would support the predictions of local realism and not QM. The Fair Sampling concept makes no sense if it is not misleading us.

It is not the Fair Sampling Assumption to say that the entire universe is not sampled. That is a function of the scientific method and applies to all experiments. Somewhat like saying the speed of light is 5 km/hr but that experiments measuring the usual value are biased because we chose an unfair day to sample. The requirement for science is that the experiment is repeatable, which is not in question with Bell tests. The only elements of Fair Sampling that should be considered are as I describe in the paragraph above.

So I think that is a YES to your 1), as you might detect a photon but be unable to match it to its partner. Or it might have been lost before arriving at the detector.

For 2), I am not sure I follow the question. I think QM and LR would make the same predictions for likelihood of detection. But I guess that to make the LR model work out, you have to find some difference. But no one was looking for that until Bell tests started blowing away LR theories.
 
  • #1,022
DevilsAvocado said:
You could see it that way. You could also see it as the very tricky nature then having to wobble between "increasing" and "decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...

So true. It seems sort of strange to suggest that some photons are over-counted and some are under-counted... and that it depends on the angle between the settings, and on whether they support LR or QM, whether they are over- or under-counted.

We know that for some angles - say 60 degrees - the predictions are noticeably different for QM (25%) vs LR (33.3%). Considering these are averages of correlated and uncorrelated pairs, that means that 1 in 4 of the correlated pairs was undercounted - but NONE of the uncorrelated pairs was undercounted! That is reeeeeeeeaaaaaallllllly asking a lot if you think about it.

But then at 30 degrees it works the OTHER way. It's QM (75%) vs LR (66.6%) now. So suddenly: 1 in 4 of the UNcorrelated pairs are undercounted - but NONE of the correlated pairs are undercounted!
 
  • #1,023
I really really don't get the equivocation in your responses, unless it's to intentionally maintain a conflation. I'll demonstrate:
(Going to color code sections of your response, and index my response to those colors.)

DrChinese said:
The Fair Sampling Assumption is: due to some element(s) of the collection and detection apparatus, either Alice or Bob (or both) did not register a member of an entangled photon pair that "should" have been seen. AND FURTHER, that photon, if detected, was one which would support the predictions of local realism and not QM. The Fair Sampling concept makes no sense if it is not misleading us.
{RED} - The question specifically avoided any detection failures whatsoever. It has no bearing whatsoever on the question, but ok. Except that you got less specific in blue for some reason.
{BLUE} - Here when you say "would support", by what model would the "would support" qualify? In fact, one of the many assumptions contained in the "would support" qualifier involves how you choose to define the equivalence of simultaneity between two spatially separated time intervals. Yet with "would support" you are tacitly requiring a whole range of assumptions to be the a priori truth. It logically simplifies to the statement: it's true because I chose definitions to make it true.


DrChinese said:
It is not the Fair Sampling Assumption to say that the entire universe is not sampled. That is a function of the scientific method and applies to all experiments. Somewhat like saying the speed of light is 5 km/hr but that experiments measuring the usual value are biased because we chose an unfair day to sample. The requirement for science is that the experiment is repeatable, which is not in question with Bell tests. The only elements of Fair Sampling that should be considered are as I describe in the paragraph above.
{RED} - But you did not describe any element in the paragraph above. You merely implied such elements are contained in the term "would support", and left it to our imagination that since "would support" defines itself to contain proper methods and assumptions, and since "would support" contains its own truth specifier, it must be valid. Intellectually absurd.

DrChinese said:
So I think that is a YES to your 1), as you might detect a photon but be unable to match it to its partner. Or it might have been lost before arriving at the detector.
{RED} - I did not specify "a" partner was detected. I explicitly specified that BOTH partners are ALWAYS detected. Yet that still doesn't explicitly require the timing of those detections to match.
{BLUE} - Note the blue doesn't specify that the detection of the partner photon didn't fail. I explicitly specified that this partner detection didn't fail, and that only the time window to specify it as a partner failed.

So granted, you didn't explicitly reject that both partners were detected, but you did explicitly input interpretations which were explicitly defined not to exist in this context, while being vague on the detection of both partners with correlation failures.

So, if I accept your yes answer, what does it mean? Does it mean both partners of a correlated pair can be detected, and still not register as a correlation? Does it mean you recognized the truth in the question, and merely chose to conflate the answer with interpretations that are by definition invalid in the stated question, so that you can avoid an explicitly false answer while implicitly justifying the conflation of an entirely different interpretation that was experimentally and a priori defined to be invalid?

I still don't know how to take it, and I think it presumptuous of me to assume a priori that "yes" actually accepts the question as stated. You did after all explicitly input what the question explicitly stated could not possibly be relevant, and referred to pairs in singular form while not acknowledging the success of the detection of both photons. This is a non-answer.

DrChinese said:
For 2), I am not sure I follow the question. I think QM and LR would make the same predictions for likelihood of detection. But I guess that to make the LR model work out, you have to find some difference. But no one was looking for that until Bell tests started blowing away LR theories.
True, I clearly and repeatedly, usually beginning with "Note", clarified that it did not in any way demonstrate the legitimacy of any particular LR model. All it does is demonstrate that, even if photon detection efficiency is 100%, a model that only involves offsets in how the photon detection pairs are correlated need not result in any excess or undercount of total photon detections. It was specifically designed as an attempt to invalidate a "fair sampling" argument when that "fair sampling" argument did not involve missing detections, and that attempt failed.

There may be, as previously noted, other ways to rule out this type of bias: by recording and comparing the actual time stamps of the uncorrelated photon detections. If this is happening, the time spread between nearly correlated photon pairs should statistically appear to increase as the angle difference increases. If pairs of uncorrelated detections were truly uncorrelated, there should be no statistical variance in the timing of the pairs of time stamps. The assumption that they are correlated, even when not measured to be so, is what would make such a statistical variance possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded, or merely hits/misses.
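
(A sketch of the kind of analysis I mean, in Python, with hypothetical inputs - real records such as the Weihs et al. data would need their own parsing. Group near-miss detection pairs by detector-angle difference and see whether the mean time spread grows with the angle; if the unmatched detections are truly unrelated, it should stay flat:)

from collections import defaultdict

def spread_vs_angle(events, near_window_ns=50.0):
    """events: iterable of (t_a_ns, t_b_ns, angle_diff_deg) for detection pairs
    that were close in time but missed the official coincidence window.
    Returns the mean |t_a - t_b| per 5-degree bin of the setting difference."""
    bins = defaultdict(list)
    for t_a, t_b, dtheta in events:
        gap = abs(t_a - t_b)
        if gap <= near_window_ns:
            bins[round(dtheta / 5.0) * 5.0].append(gap)
    return {angle: sum(gaps) / len(gaps) for angle, gaps in sorted(bins.items())}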
 
  • #1,024
my_wan said:
There may be, as previously noted, other ways to rule out this type of bias: by recording and comparing the actual time stamps of the uncorrelated photon detections. If this is happening, the time spread between nearly correlated photon pairs should statistically appear to increase as the angle difference increases. If pairs of uncorrelated detections were truly uncorrelated, there should be no statistical variance in the timing of the pairs of time stamps. The assumption that they are correlated, even when not measured to be so, is what would make such a statistical variance possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded, or merely hits/misses.

That is actually what I want to do in order to demonstrate the difficulties involved with the Unfair Sampling Hypothesis. There should be a pattern to the bias if it is tenable.

The time stamps are recorded at each station whenever a detection is made. There are 2 detectors for Alice and 2 for Bob, 4 total. That way there is no question. I have actual data but it is in raw form. I expect it will be a while before I have much to share with everyone.

In the meantime, I can tell you that Peter Morgan has done a lot of analysis on the same data. He has not looked for that specific thing, but very very close. He analyzed delay characteristics for anomalies. There were some traces, but they were far far too weak to demonstrate a Fair Sampling issue. Peter has not published his result yet, as I recall.
 
  • #1,025
my_wan said:
{RED} - The question specifically avoided any detection failures whatsoever. It has no bearing whatsoever on the question, but ok. Except that you got less specific in blue for some reason.
{BLUE} - Here when you say "would support", by what model would the "would support" qualify? In fact, one of the many assumptions contained in the "would support" qualifier involves how you choose to define the equivalence of simultaneity between two spatially separated time intervals. Yet with "would support" you are tacitly requiring a whole range of assumptions to be the a priori truth. It logically simplifies to the statement: it's true because I chose definitions to make it true.

If there are no detection failures, then where is the sampling coming into play? You have detected all there are to detect!

As to the supporting idea: obviously, if there is no bias, then you get the same conclusion whether you look at the sample or the universe. If you push LR, then you are saying that an unusual number of "pro QM" pairs are detected and/or an unusual number of "pro LR" pairs are NOT detected. (Except that relationship varies all over the place.) So I guess I don't see what that has to do with assumptions. Just seems obvious that there must be a bias in the collection if the hypothesis is to be tenable.
 
  • #1,026
my_wan said:
By recording and comparing the actual time stamps of the uncorrelated photon detections: if this is happening, the time spread between nearly correlated photon pairs should statistically appear to increase as the angle difference increases. If pairs of uncorrelated detections were truly uncorrelated, there should be no statistical variance in the timing of the pairs of time stamps. The assumption that they are correlated, even when not measured to be so, is what would make such a statistical variance possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded, or merely hits/misses.

Some work has been done towards this, and the results appear to support your suspicion. See Appendix A of this paper (starting at page 19 http://arxiv.org/abs/0712.2565, J. Phys. Soc. Jpn. 76, 104005 (2007))
 
  • #1,027
billschnieder said:
Some work has been done towards this, and the results appear to support your suspicion. See Appendix A of this paper (starting at page 19 http://arxiv.org/abs/0712.2565, J. Phys. Soc. Jpn. 76, 104005 (2007))

We are all working off the same dataset and this is a very complicated subject. But there is not the slightest evidence that there is a bias sufficient to account for a LR result. So, no, there is no basis - at this time - for my_wan's hypothesis. However, it is my intent to document this more clearly. As I mention, it is very complicated and not worth debating without going through the whole process from start to finish. Which is a fairly massive project.

All of the teams looking at this are curious as to whether there might be a very small actual bias. But if it is there, it is small. But any at all could mean a potential new discovery.
 
Last edited:
  • #1,028
DrChinese said:
If there are no detection failures, then where is the sampling coming into play? You have detected all there are to detect!
Oh, so I was fully justified in thinking I was being presumptuous in taking your "yes" answer at face value.

Whether or not a coincidence is detected is independent of whether or not a photon is detected. Coincidence detection is wholly dependent on the time window, while photon detection is dependent only on detecting the photon at 'any' proper time. Thus, in principle, a correlation detection failure can occur even when both correlated photons are detected.

This is a bias, and qualifies as a "fair sampling" argument even when 100% of the photons are detected.

DrChinese said:
As to the supporting idea: obviously, if there is no bias, then you get the same conclusion whether you look at the sample or the universe. If you push LR, then you are saying that an unusual number of "pro QM" pairs are detected and/or an unusual number of "pro LR" pairs are NOT detected. (Except that relationship varies all over the place.) So I guess I don't see what that has to do with assumptions. Just seems obvious that there must be a bias in the collection if the hypothesis is to be tenable.
Again, I am specifically stipulating a 100% detection rate. The "bias" is only in the time window in which those detections take place: not a failure of 'photon' detection, but a failure of time synchronization to qualify a correlated pair of detections as correlated.

This is to illustrate the invalidity of applying empirical invalidity of one type of "fair sampling" bias to all forms of "fair sampling" bias. It's not a claim of a LR solution to BI violations. It may be a worthy investigation though, because it appears to have empirical consequences that might be checked.

Any number of mechanisms can lead to this, such as the frame dependence of simultaneity, a change in the refractive index of polarizers as the angle changes relative to a photon's polarization, etc. The particular mechanism is immaterial to testing for such effects, and immaterial to the illegitimacy of assuming that ruling out missing photon detections automatically rules out missing correlation detections even when both photon detections are recorded. Missing the time window needed to qualify as a correlation detection is an entirely separate issue from missing a photon detection.
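To make the distinction concrete, here is a toy sketch in Python. Every photon is detected; the only thing that varies is an assumed angle-dependent transit delay, which together with a fixed coincidence window causes some genuinely correlated pairs to be missed as coincidences. The delay law and every number in it are invented for illustration only, not claims about real polarizers:

# Toy model: 100% photon detection, but whether a correlated pair is COUNTED as a
# coincidence depends on the time window. All parameters below are invented.
import math
import random

WINDOW = 20e-12       # coincidence window (20 ps, illustrative)
JITTER = 10e-12       # per-detector timing jitter (illustrative)
DELAY_SCALE = 30e-12  # assumed extra transit delay at 90 degrees relative offset

def detection_time(emit_t, offset_deg):
    """The photon is always detected; only its time stamp shifts with the assumed delay."""
    delay = DELAY_SCALE * math.sin(math.radians(offset_deg)) ** 2
    return emit_t + delay + random.gauss(0.0, JITTER)

def coincidence_fraction(rel_angle_deg, n_pairs=100_000):
    """Fraction of truly correlated pairs that still land inside the window."""
    kept = 0
    for i in range(n_pairs):
        emit_t = i * 1e-7                      # roughly 10,000 pairs per second
        t_a = detection_time(emit_t, 0.0)
        t_b = detection_time(emit_t, rel_angle_deg)
        if abs(t_a - t_b) <= WINDOW:
            kept += 1
    return kept / n_pairs

for angle in (0, 22.5, 45, 67.5, 90):
    print(angle, coincidence_fraction(angle))

The point of the toy is only that the accepted sample of coincidences can depend on the relative angle even though no photon detection is ever missed; it says nothing about whether any such delay actually exists.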

DrChinese said:
That is actually what I want to do in order to demonstrate the difficulties involved with the Unfair Sampling Hypothesis. There should be a pattern to the bias if it is tenable.

The time stamps are recorded at each station whenever a detection is made. There are 2 detectors for Alice and 2 for Bob, 4 total. That way there is no question. I have actual data but it is in raw form. I expect it will be a while before I have much to share with everyone.

In the meantime, I can tell you that Peter Morgan has done a lot of analysis on the same data. He has not looked for that specific thing, but very very close. He analyzed delay characteristics for anomalies. There were some traces, but they were far far too weak to demonstrate a Fair Sampling issue. Peter has not published his result yet, as I recall.
This is interesting, and a logical next step for EPR issues. I haven't got such raw data. Perhaps, if the statistical signature is too weak, better constraints could be derived from the variance across various specific settings. The strongest correlations should occur the nearer the two detector settings are to each other, but in the ideal case they must differ somewhat, so that there are uncorrelated photon detection sets to compare. The variance of 'near hit' time stamps should increase as the relative angle increases. It would be useful to rule this out, but invalidating a fair sampling bias that involves missing detections doesn't do it. It still falls within the "fair sampling" class of models, which is the main point I wanted to make.
 
  • #1,029
DevilsAvocado said:
You could see it this way. You could also see it as the very tricky nature then has to be wobbling between "increasing/decreasing" unfair sampling, which to me makes the argument for fair sampling even stronger...

DrChinese said:
So true. Seems sort of strange to suggest that some photons are over-counted and some are under-counted... and that depends on the angle between. And whether they support LR or QM as to whether they are over or under counted.

my_wan said:
Physically it's exactly equivalent to a tennis ball being bounced off a wall taking a longer route back as the angle it hits the wall increases. It only requires the assumption that the more offset a polarizer is the longer it takes the photon to tunnel through it. Doesn't really convince me either without some testing, but certainly not something I would call nature being tricky. At least not any more tricky than even classical physics is known to be at times. Any sufficiently large set of dependent variables are going to be tricky, no matter how simple the underlying mechanisms. Especially if it looks deceptively simple on the surface.


my_wan & DrChinese, wouldn’t a very simple (almost silly) way of ruling out all these questions around the possible 'weaknesses' in the setup (angles / time window / etc) be to run one test with non-entangled pairs?

If there’s something 'wrong', the same biases must logically show for 'normal' photons also, right...??


(And if I’m right – this has of course already been done.)
 
  • #1,030
DrChinese said:
The De Raedt simulation is an attempt to demonstrate that there exists an algorithm whereby (Un)Fair Sampling leads to a violation of a BI - as observed - while the full universe does not (as required by Bell).


I have only done a Q&D inspection of the code, and I’m probably missing something here, but to me it looks like angle2 is always at a fixed (user) offset to angle1:

angle2 = angle1 + Radians(Theta) ' fixed value offset always


Why? It would be fairly easy to save two independently random angles for angle1 & angle2 in the array for result, and after the run sort them out for the overall statistics...

Or, are you calling the main function repeatedly with different random values for argument Theta...? If so, why this solution...?

Or, is angle2 always at a fixed offset of angle1? If so, isn’t this an extreme weakness in the simulation of the "real thing"??
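Just to spell out what I mean by the alternative above, here is a sketch in Python (not the simulation's own language), where measure() is only a placeholder for whatever per-pair routine the program actually uses:

# Draw angle1 and angle2 independently, record both, and only afterwards bin the
# outcomes by relative angle. measure() is a placeholder, not De Raedt's routine.
import random
from collections import defaultdict

def measure(angle1_deg, angle2_deg):
    # Placeholder: return +1/-1 outcomes for the two stations.
    return random.choice((-1, 1)), random.choice((-1, 1))

def run(n_trials=100_000, bin_deg=1.0):
    products = defaultdict(list)
    for _ in range(n_trials):
        a1 = random.uniform(0.0, 360.0)
        a2 = random.uniform(0.0, 360.0)       # independent of a1
        theta = abs(a1 - a2) % 180.0          # relative angle
        oa, ob = measure(a1, a2)
        products[int(theta // bin_deg)].append(oa * ob)
    return {b * bin_deg: sum(v) / len(v) for b, v in sorted(products.items())}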
 
  • #1,031
DevilsAvocado said:
my_wan & DrChinese, wouldn’t a very simple (almost silly) way of ruling out all these questions around the possible 'weaknesses' in the setup (angles / time window / etc) be to run one test with non-entangled pairs?

If there’s something 'wrong', the same biases must logically show for 'normal' photons also, right...??


(And if I’m right – this has of course already been done.)
There may be ways to check specific mechanisms, like refractive index, but in this case the bias is not presumed to miss any photon detections. The only bias is in the time window that determines whether we define two correlated photons to be correlated or not. Thus the general case, involving how we test correlations, can only be tested with photons we can reasonably assume are correlated.

You might also try passing femtosecond pulses through a polarizer and checking how it affects the spread of the pulse. A mechanism may also involve something similar to squeezed light, which, due to the Uncertainty Principle, reduces the uncertainty of one measurable at the cost of increasing the uncertainty of its conjugate. The photons with the largest offsets relative to the polarizer may effectively be squeezed more, thus inducing a spread (higher momentum uncertainty) in the output.

Still, a general test must involve correlations, and mechanisms can be investigated once an effect is established. Uncorrelated photon sources may, in some cases, be able to test specific mechanisms, but not the general case involving EPR correlation tests.
 
  • #1,032
DevilsAvocado said:
my_wan & DrChinese, wouldn’t a very simple (almost silly) way of ruling out all these questions around the possible 'weaknesses' in the setup (angles / time window / etc) be to run one test with non-entangled pairs?

If there’s something 'wrong', the same biases must logically show for 'normal' photons also, right...??


(And if I’m right – this has of course already been done.)

Pretty much all of these variations are run all the time, and there is no hint of anything like this. Unentangled and entangled photons act alike, except for correlation stats. This isn't usually written up because it is not novel or interesting to other scientists - ergo not too many papers to cite on it. I wouldn't publish a paper saying the sun came up this morning either.

You almost need to run variations with and without entanglement to get the apparatus tuned properly anyway. And it is generally pretty easy to switch from one to the other.
 
  • #1,033
my_wan said:
There may be ways to check specific mechanisms, like refractive index, but in this case the bias is not presumed to miss any photon detections. The only bias is in the time window that determines whether we define two correlated photons to be correlated or not. Thus the general case, involving how we test correlations, can only be tested with photons we can reasonably assume are correlated.

You might also try passing femtosecond pulses through a polarizer and checking how it affects the spread of the pulse. A mechanism may also involve something similar to squeezed light, which, due to the Uncertainty Principle, reduces the uncertainty of one measurable at the cost of increasing the uncertainty of its conjugate. The photons with the largest offsets relative to the polarizer may effectively be squeezed more, thus inducing a spread (higher momentum uncertainty) in the output.

Still, a general test must involve correlations, and mechanisms can be investigated once an effect is established. Uncorrelated photon sources may, in some cases, be able to test specific mechanisms, but not the general case involving EPR correlation tests.

I don't see why you say that uncorrelated sources cannot be used in the general case. I think that should not be an issue, as you can change from uncorrelated to correlated almost at the flip of an input polarizer setting.
 
  • #1,034
DevilsAvocado said:
I have only done a Q&D inspection of the code, and I’m probably missing something here, but to me it looks like angle2 is always at a fixed (user) offset to angle1:

angle2 = angle1 + Radians(Theta) ' fixed value offset always


Why? It would be fairly easy to save two independently random angles for angle1 & angle2 in the array for result, and after the run sort them out for the overall statistics...

Or, are you calling the main function repeatedly with different random values for argument Theta...? If so, why this solution...?

Or, is angle2 always at a fixed offset of angle1? If so, isn’t this an extreme weakness in the simulation of the "real thing"??


I wanted to graph every single degree from 0 to 90. Since it is a random test, it doesn't matter from trial to trial. I wanted to do X iterations for each theta, and sometimes I wanted fixed angles and sometimes random ones. The De Raedt setup sampled a little differently, and I wanted to make sure that I could see clearly the effect of changing angles. A lot of their plots did not have enough data points to suit me.
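In rough outline, the sampling loop described above would look something like this (a Python sketch only, not my actual program; run_trial() stands in for the per-pair logic):

# Sweep every integer theta from 0 to 90, X iterations each; angle1 can be fixed
# or random per trial, and angle2 is always angle1 + theta as in the quoted line.
import random

def run_trial(angle1_deg, angle2_deg):
    # Placeholder for the per-pair simulation logic.
    return random.choice((-1, 1)), random.choice((-1, 1))

def sweep(iterations=10_000, random_angle1=True):
    results = {}
    for theta in range(0, 91):                        # one data point per degree
        total = 0
        for _ in range(iterations):
            a1 = random.uniform(0.0, 360.0) if random_angle1 else 0.0
            a2 = a1 + theta                           # fixed offset per trial
            oa, ob = run_trial(a1, a2)
            total += oa * ob
        results[theta] = total / iterations
    return results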
 
  • #1,035
DrChinese said:
I don't see why you say that uncorrelated sources cannot be used in the general case. I think that should not be an issue, as you can change from uncorrelated to correlated almost at the flip of an input polarizer setting.

Of course I constantly switch between single beams that are polarized and unpolarized, as well as mixed polarizations, and from polarizer setting to stacked polarizers, to counterfactual assumptions with parallel polarizers, to correlations in EPR setups. Of course you can compare correlated and uncorrelated cases.

The problem is that in the correlation case, which involves variances in the time windows used to establish that such a correlation exists, any corresponding effect in uncorrelated cases is very dependent on the specific mechanism, realistic or not, inducing the time window offsets in the EPR case. Thus testing time window variances in correlation detections tests for the existence of the effect independent of the mechanism inducing it. Testing uncorrelated beam cases can only test very specific sets of mechanisms in any given test design. The effect may be there in the uncorrelated beam case, but I would want to know there was an effect to look for before I went through a myriad of uncorrelated beam tests to search for it.

It's not unlike the PEAR group at Princeton deciding that they should investigate the applications of telekinesis without actually establishing that such an effect exists. At least non-realists have a real effect to point at, and a real definition to work with. :biggrin:
 
  • #1,036
my_wan said:
Oh, so I was fully justified in thinking I was being presumptuous in taking your "yes" answer at face value.

Whether or not a coincidence is detected is independent of whether or not a photon is detected. Coincidence detection is wholly dependent on the time window, while photon detection is dependent only on detecting the photon at 'any' proper time. Thus, in principle, a correlation detection failure can occur even when both correlated photons are detected.

This is a bias, and qualifies as a "fair sampling" argument even when 100% of the photons are detected.

OK, sure, I follow now. Yes, assuming 100% of all photons are detected, you still must pair them up. And it is correct to say that you must use a rule of some kind to do this. Time window size then plays a role. This is part of the experimentalist's decision process, that is true. And in fact, this is what the De Raedt model uses as its exploit.

I don't think there is too much here, but again I have not personally looked at the dataset (I am still figuring out the data format and have been too busy to get that done in past weeks). But I will report when I have something meaningful.

In the meantime, you can imagine that there is a pretty large time interval between events. In relative terms, of course. There may be 10,000 pairs per second, so perhaps an average of 10-100 million picoseconds between events. With a window on the order of 20 ps, you wouldn't expect to have a lot of difficulty pairing them up. On the other hand, as the window increases, you will end up with pairs that are no longer polarization entangled (because they are distinguishable in some manner). Peter tells me that a Bell Inequality is violated with windows as large as 100 ps.

My point being - again - that it is not so simple to formulate your hypothesis in the presence of so many existing experiments. The De Raedt simulation shows how hard this really is.
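For what it's worth, a quick back-of-envelope check of those orders of magnitude (the figures are just the round numbers above, nothing measured):

# Rough accidental-coincidence estimate from the round numbers above.
pair_rate = 10_000       # pairs per second, order of magnitude
window = 20e-12          # coincidence window in seconds (20 ps)

mean_spacing = 1.0 / pair_rate               # about 1e-4 s, i.e. ~100 million ps between events
accidentals_per_event = pair_rate * window   # rough chance an unrelated event lands in the window
print(mean_spacing, accidentals_per_event)   # ~1e-4 s and ~2e-7

So with a 20 ps window the chance of an unrelated event landing in the window is of order 10^-7 per event, consistent with pairing not being a problem at these rates.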
 
  • #1,037
DrChinese said:
I don't think there is too much here, but again I have not personally looked at the dataset (I am still figuring out the data format and have been too busy to get that done in past weeks). But I will report when I have something meaningful.
I have significant doubts myself, for a myriad of reasons. Yet still, having the actual evidence in hand would be valuable. The point in the debate here was the legitimacy of applying the "fair sampling" no-go based on photon detections to time-window correlation coupling, which is also a form of "fair sampling", i.e., over-generalizing a no-go.

DrChinese said:
Peter tells me that a Bell Inequality is violated with windows as large as 100 ps.
That, I believe, is about a 3 cm spread at light speed. I'm also getting a 100 million picosecond average between events, and the time window needs to be significantly smaller than that spacing to avoid significant errant correlation counts. Still, correlations being effectively washed out beyond 100 ps, about one millionth of the average event spacing, seems awfully fast. I need to recheck my numbers. What's the wavelength of the light in this dataset?

It might be interesting to consider only the misses in a given dataset, with tight initial time window constraints, and see if a time window shift will recover a non-random percentage of correlations in just that non-correlated subset. That would put some empirical constraints on photon detection timing variances, whatever the physical reason. That correlations would be washed out when averaged over enough variation is not surprising, but my napkin numbers look a bit awkward.
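In sketch form (Python; the record layout is hypothetical, and "misses" means pairs that failed the tight initial window):

# Scan a range of time-window offsets over the "misses only" subset and see
# whether any offset recovers an above-chance rate of matching outcomes.
# miss_pairs is assumed to be a list of (t_alice, t_bob, outcomes_match) records.

def recovered_match_rate(miss_pairs, offset, window=20e-12):
    hits = [match for t_a, t_b, match in miss_pairs
            if abs((t_b + offset) - t_a) <= window]
    return sum(hits) / len(hits) if hits else None

def scan_offsets(miss_pairs, step=10e-12, max_offset=1e-9):
    n = int(max_offset / step)
    return {k * step: recovered_match_rate(miss_pairs, k * step)
            for k in range(-n, n + 1)}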

DrChinese said:
My point being - again - that it is not so simple to formulate your hypothesis in the presence of so many existing experiments. The De Raedt simulation shows how hard this really is.
Yes, it is hard, and interesting. I'm not one to make claims about what De Raedt's simulation actually means. I'm only warning against presumptions about meaning based on over-generalizations of the constraints we do have, for the very difficulties stated. This was the point of formulating a "fair sampling" argument that explicitly avoided missing photon detections. It wasn't to claim a classical resolution to BI violations, but this caution also applies to the meaning of BI violations.
 
  • #1,038
DrChinese said:
Pretty much all of these variations are run all the time, and there is no hint of anything like this. Unentangled and entangled photons act alike, except for correlation stats. This isn't usually written up because it is not novel or interesting to other scientists - ergo not too many papers to cite on it. I wouldn't publish a paper saying the sun came up this morning either.

You almost need to run variations with and without entanglement to get the apparatus tuned properly anyway. And it is generally pretty easy to switch from one to the other.

my_wan said:
There may be ways to check specific mechanisms, like refractive index, but in this case the bias is not presumed to miss any photon detections. The only bias is in the time window that determines whether we define two correlated photons to be correlated or not. Thus the general case, involving how we test correlations, can only be tested with photons we can reasonably assume are correlated.

my_wan said:
Pre-existing raw data might be sufficient, depending on whether time stamps were recorded, or merely hit/misses recorded.


Thanks guys. It looks like the sun will come up tomorrow as well. :smile:

By this we can draw the conclusion that all talk about "unfair angles" is a dead-end. All angles are treating every photon alike, whether it’s entangled or not.

I’ve found this (not peer reviewed) paper that thoroughly examines wide and narrow window coincidences on raw data from EPR experiments conducted by Gregor Weihs and colleagues, with tests on window sizes spanning from 1 ns to 75,000 ns, using 3 different rules for identifying coincidences:

http://arxiv.org/abs/0906.5093

A Close Look at the EPR Data of Weihs et al
James H. Bigelow
(Submitted on 27 Jun 2009)

Abstract: I examine data from EPR experiments conducted in 1997 through 1999 by Gregor Weihs and colleagues. They used detection windows of 4-6 ns to identify coincidences; I find that one obtains better results with windows 40-50 ns wide. Coincidences identified using different windows have substantially different distributions over the sixteen combinations of Alice's and Bob's measurement settings and results, which is the essence of the coincidence time loophole. However, wide and narrow window coincidences violate a Bell inequality equally strongly. The wider window yields substantially smaller violations of no-signaling conditions.
 
Last edited by a moderator:
  • #1,039
DrChinese said:
I wanted to graph every single degree from 0 to 90. Since it is a random test, it doesn't matter from trial to trial. I wanted to do X iterations for each theta, and sometimes I wanted fixed angles and sometimes random ones. The De Raedt setup sampled a little differently, and I wanted to make sure that I could see clearly the effect of changing angles. A lot of their plots did not have enough data points to suit me.

Ahh! That makes sense! Thanks.
 
  • #1,040
DevilsAvocado said:
Thanks guys. It looks like the sun will come up tomorrow as well. :smile:

By this we can draw the conclusion that all talk about "unfair angles" is a dead-end. All angles are treating every photon alike, whether it’s entangled or not.
Not sure what an unfair angle is, but valid empirical data from any angle is not unfair :-p

DevilsAvocado said:
I’ve found this (not peer reviewed) paper that thoroughly examines wide and narrow window coincidences on raw data from EPR experiments conducted by Gregor Weihs and colleagues, with tests on window sizes spanning from 1 ns to 75,000 ns, using 3 different rules for identifying coincidences:
http://arxiv.org/abs/0906.5093

A Close Look at the EPR Data of Weihs et al
James H. Bigelow
(Submitted on 27 Jun 2009)

Abstract: I examine data from EPR experiments conducted in 1997 through 1999 by Gregor Weihs and colleagues. They used detection windows of 4-6 ns to identify coincidences; I find that one obtains better results with windows 40-50 ns wide. Coincidences identified using different windows have substantially different distributions over the sixteen combinations of Alice's and Bob's measurement settings and results, which is the essence of the coincidence time loophole. However, wide and narrow window coincidences violate a Bell inequality equally strongly. The wider window yields substantially smaller violations of no-signaling conditions.
Very cool! This is something new to me :!)

A cursory glance is already intensifying my curiosity. I don't even want to comment on the information in the abstract until I get a chance to review it more. Interesting :smile:
 
  • #1,041
DrChinese said:
OK, sure, I follow now. Yes, assuming 100% of all photons are detected, you still must pair them up. And it is correct to say that you must use a rule of some kind to do this. Time window size then plays a role. This is part of the experimentalist's decision process, that is true.

Even if you succeeded in pairing them up, it is not sufficient to avoid the problem I explained in post #968 here: https://www.physicsforums.com/showpost.php?p=2783598&postcount=968

A triplet of pairs extracted from a dataset of pairs is not guaranteed to be equivalent to a triplet of pairs extracted from a dataset of triples. Therefore, even with 100% detection and perfect matching of the pairs to each other, you will still not obtain a fair sample, fair in the sense that the terms within the CHSH inequality will correspond to the terms calculated from the experimental data.

I have shown in the above post how to derive Bell's inequalities from a dataset of triples without any other assumptions. If a dataset of pairs can be used to generate the terms in the inequality, it should be easy to derive Bell's inequalities based only on the assumption that we have a dataset of pairs; anyone is welcome to try to do that -- it is not possible.
 
  • #1,042
DrC,
Maybe this will reconcile some incongruities in my napkin numbers. Is the dataset you spoke of from Weihs et al (1998) and/or (2007)? This would put the time windows in the ns range. The Weihs et al data was drawn from settings that were randomly reset every 100 ns, with the 14 ns of data collected during the switching time discarded.

The data, as presented by the preprint DevilsAvocado posted a link to, in spite of justifying certain assumptions about detection time variances, still has some confounding features I'm not clear on yet.

I may have to get this raw data from Weihs et al and put it in some kind of database in raw form, so I can query the results of any assumptions.
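For the "query any assumption" part, even a throwaway local database would do to start. A sketch with Python's built-in sqlite3 (the schema and column names are my own invention, just to show the kind of query I mean; a real import would follow whatever format the Weihs files actually use):

# Load raw time-stamped detections into a local database, then pull coincidence
# candidates for any chosen window. Schema and names are invented for illustration.
import sqlite3

conn = sqlite3.connect("epr_raw.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS detections (
    station TEXT,      -- 'A' (Alice) or 'B' (Bob)
    channel INTEGER,   -- which detector fired at that station
    setting REAL,      -- analyzer setting in degrees
    t       REAL       -- time stamp in seconds
);
CREATE INDEX IF NOT EXISTS idx_detections_t ON detections(station, t);
""")

def coincidences(window_s):
    """All Alice/Bob detection pairs whose time stamps differ by at most window_s."""
    return conn.execute("""
        SELECT a.t, b.t, a.setting, b.setting, a.channel, b.channel
        FROM detections AS a
        JOIN detections AS b
          ON b.station = 'B'
         AND b.t BETWEEN a.t - ? AND a.t + ?
        WHERE a.station = 'A'
    """, (window_s, window_s)).fetchall()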
 
  • #1,043
my_wan said:
Not sure what an unfair angle is, but valid empirical data from any angle is not unfair :-p
Sorry, I meant unfair "tennis ball"... :-p

my_wan said:
Very cool! This is something new to me :!)
Yeah! It looks interesting!

my_wan said:
I may have to get this raw data from Weihs et al and put it in some kind of database in raw form, so I can query the results of any assumptions.
Why not contact James H. Bigelow (http://www.rand.org/about/people/b/bigelow_james_h.html)? He looks like a nice guy:

http://www.rand.org/about/people/photos/b/bigelow_james_h.jpg

With a dead serious occupation in computer modeling and analysis of large data files, and modeling of training in Air Force fighter pilots! (+ Ph.D. in operations research + B.S. in mathematics)

(= As far as you can get from Crackpot Kracklauer! :smile:)
 
Last edited by a moderator:
  • #1,044
billschnieder said:
Even if you succeeded in pairing them up, it is not sufficient to avoid the problem I explained in post #968 here: https://www.physicsforums.com/showpost.php?p=2783598&postcount=968

A triplet of pairs extracted from a dataset of pairs is not guaranteed to be equivalent to a triplet of pairs extracted from a dataset of triples. Therefore, even with 100% detection and perfect matching of the pairs to each other, you will still not obtain a fair sample, fair in the sense that the terms within the CHSH inequality will correspond to the terms calculated from the experimental data.

I have shown in the above post how to derive Bell's inequalities from a dataset of triples without any other assumptions. If a dataset of pairs can be used to generate the terms in the inequality, it should be easy to derive Bell's inequalities based only on the assumption that we have a dataset of pairs; anyone is welcome to try to do that -- it is not possible.

I re-read your post, and still don't follow. You derive a Bell Inequality assuming realism. Same as Bell. So that is a line in the sand. It seems that should be respected with normal triples. Perhaps if you provide a dataset of triples from which doubles chosen randomly would NOT represent the universe - and the subsample should have a correlation rate of 25%. That would be useful.
 
  • #1,045
my_wan said:
... put it in some kind of database in raw form

If you do get your hands on the raw data, maybe you should consider using something that’s a little more "standard" than AutoIt...?

One alternative is MS Visual Studio Express + MS SQL Server Express, which are both completely free. You will get the latest top of the line RAD environment, including powerful tools for developing Windows applications, and extensive help in IntelliSense (autocompletion), etc.

The language in Visual Basic Express has many similarities to the BASIC language in AutoIt.

SQL Server Express is a very powerful SQL DBMS, with the only real limit being the 10 GB database size (compared to the standard versions).

Visual Studio 2010 Express

SQL Server 2008 R2 Express


(P.S. I’m not employed by MS! :wink:)
 
Last edited by a moderator:
  • #1,046
I run MySQL (not MS), and Apache with PHP. I used to run Visual Studio (before .NET) and tried the Express version. I don't really like it, though it was easy enough to program in.

MySQL would be fine, and will work with AutoIt, DOS, or PHP.

These days I use AutoIt a lot simply because I can rapidly write anything I want, with or without a GUI, and compile or just run the script. It doesn't even have to be installed with an installer. Its functions can do things that take intense effort working with APIs in lower-level languages, including Visual Studio. I can do things in an hour that would take me weeks in most other languages, and some things I've never figured out how to do in other languages. I've also run it on Linux under Wine. Not the fastest or most elegant, but it suits my needs so perfectly 99.9% of the time. It's effectively impossible to hide source code from any knowledgeable person, but that's not something that concerns me.
 
  • #1,047
my_wan said:
suits my needs so perfectly 99.9% of the time
Okidoki, if you say so.
my_wan said:
impossible to hide source code
My thought was to maybe make it easier to show the code to the world, if you find something very interesting... :rolleyes:
 
  • #1,048
DrChinese said:
You derive a Bell Inequality assuming realism. Same as Bell.
Absolutely NOT! There is no assumption about locality or realism. We have a dataset from an experiment in which we collected data for three boolean variables x, y, z. That is, each data point consists of 3 values, one each for x, y and z, with each value either 1 or 0. We could say our dataset is (x_i, y_i, z_i), i = 1..n. Our task is then to derive inequalities which sums of products of pairs extracted from this dataset of triplets must obey. From our dataset we can generate pair products (xy, yz, xz). Note that there is no mention of the type of experiment; it could be anything, a poll in which we ask three (yes/no) questions, or the EPR situation. We completely divorce ourselves from the physics or specific domain of the experiment and focus only on the mathematics. Note also that there is no need for randomness here; we are using the full universe of the dataset to obtain our pair products. We do that and realize that the inequalities obtained are Bell-like. That is all there is to it.

The question then is: if a dataset violates these inequalities, what does it mean? Since there were no physical assumptions in their derivation, violation of the inequalities must mean ONLY that the dataset which violates the inequalities is not mathematically compatible with the dataset used to generate the inequalities.

The example I presented involving doctors and patients shows this clearly.

Perhaps if you provide a dataset of triples from which doubles chosen randomly would NOT represent the universe - and the subsample should have a correlation rate of 25%. That would be useful.
I'm not sure you understand the point yet. The whole point is to show that any pairs extracted from a dataset of triples MUST obey the inequalities, but pairs from a dataset of just pairs need not! So your request is a bit strange. Do you agree that in Aspect-type experiments, no triples are ever collected, only pairs? In other words, each data point consists of only two values of dichotomous variables, not three?

This shows that there is a simple mathematical reason why Bell-type inequalities are violated by experiments, which has nothing to do with locality or "realism" -- i.e., Bell's inequalities are derived assuming three values (a,b,c) per data point; however, in experiments only pairs (a,b) are ever measured; therefore the dataset from the experiments is not mathematically compatible with the one assumed in Bell's derivation.

So if you are asking for a dataset of pairs which violates the inequalities, I will simply point you to all Bell-test experiments ever done in which only pairs of data points were obtained, which amounts to 100% of them.

Now, you could falsify the above in the following way:
1) Provide an experiment in which triples were measured and Bell's inequalities were violated
2) OR, derive an inequality assuming a dataset of pairs right from the start, and show that experiments still violate it

EDIT:
DrC,
It should be easy for you to simulate this and verify the above to be true since you are a software person too.
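Since a simulation is invited: below is a minimal check in Python of the triples claim, using one common Bell-type form for ±1-valued variables, <xy> + <yz> - <xz> <= 1 (0/1 data can be mapped by v -> 2v - 1, which is my choice here, not something from the post). Per triple the quantity xy + yz - xz can only equal +1 or -3, so the average from any dataset of triples respects the bound regardless of the distribution; no locality or realism assumption enters.

# Check: pair products formed from the SAME dataset of triples always satisfy
# <xy> + <yz> - <xz> <= 1, whatever the (even correlated) distribution of triples.
import random

def bell_sum_from_triples(n=1_000_000):
    sxy = syz = sxz = 0
    for _ in range(n):
        x = random.choice((-1, 1))
        y = x if random.random() < 0.7 else -x   # deliberately correlated with x
        z = random.choice((-1, 1))
        sxy += x * y
        syz += y * z
        sxz += x * z
    return (sxy + syz - sxz) / n

print(bell_sum_from_triples())   # never exceeds 1, since xy + yz - xz is +1 or -3 per triple

Three separately collected datasets of pairs, one for each of (x,y), (y,z) and (x,z), are not tied together triple by triple, so nothing in this bit of arithmetic constrains them; that is the mathematical point being argued above.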
 
Last edited:
  • #1,049
Not to deliberately pull this thread in a different direction, but another thing to consider when discussing whether action at a distance is possible is the effect that multiple universe theory has on this concept.

I'm not an expert, but what I have gathered about MU is that each time an entanglement is formed, each universe contains a matched set of entangled particles (i.e. in the universe where particle A is produced with spin 'up', entangled particle B will have spin 'down'). Since all possible outcomes are produced in proportion to the probability of each outcome, there will necessarily be universes with each possible 'combination' of entangled pairs. Then when we measure the entangled attributes of one of the particles, we are not actually having any effect on the other entangled particle at all. The 'effect' is local, and is on us, the observer, as we are now cut off from the other universes that contain the other results. So, for example, since we now observe only the universe where particle A has spin 'up', we know that when we observe entangled particle B (or someone else in our observable universe observes particle B and passes the information to us) we will see the complementary measurement to our observed particle A.

So, in this theory, no spooky action at a distance, just local interaction between the observer and the observed particle, which occurs at lightspeed or slower.
 
  • #1,050
DougW said:
Not to deliberately pull this thread in a different direction, but another thing to consider when discussing whether action at a distance is possible is the effect that multiple universe theory has on this concept.
Why consider it? The multiverse is just an ontological construct, crafted so as not to actually say anything new (provide any actual physics) that QM doesn't already contain. So if we're considering the set of all possible models consistent with QM, for the purposes of BI, then QM already covers this special case. Unless you can say exactly what it entails in terms of BI in a relevant way.
 
