Is action at a distance possible as envisaged by the EPR Paradox?

The discussion centers on the possibility of action at a distance as proposed by the EPR Paradox, with participants debating the implications of quantum entanglement. It is established that while entanglement has been experimentally demonstrated, it does not allow for faster-than-light communication or signaling. The conversation touches on various interpretations of quantum mechanics, including the Bohmian view and many-worlds interpretation, while emphasizing that Bell's theorem suggests no local hidden variables can account for quantum predictions. Participants express a mix of curiosity and skepticism regarding the implications of these findings, acknowledging the complexities and ongoing debates in the field. Overall, the conversation highlights the intricate relationship between quantum mechanics and the concept of nonlocality.
  • #481
This is a fascinating debate although I must admit it is difficult to follow at times. my_wan's arguments appear very deep and well thought out, but I think I'm missing the requisite philosophical training to appreciate fully his viewpoint. However the exchanges between my_wan and DrChinese are very educational and I thank them for their efforts here in enlightening the subtle issues at the core of the EPR debate. :)

Earlier, I suggested a scientific experiment that would help settle this one way or the other, since as I understand it my_wan's explanation for the non-local correlations in entanglement would require that the correlations are instantaneous.

If we can demonstrate any delay in the entanglement correlations would that not rule out the relational theory of QM or the existence of fundamental probabilistic elements of reality (probabilistic realism)?

In principle it may be possible to construct a quantum computer which could record the time of qubit switching for certain qubits, although we would have to factor out the limit on qubit switching speed imposed by the uncertainty principle (mentioned previously).

Alternatively it may be possible to demonstrate a delay in Aspect-type experiments by refining the timing and precision of the switching apparatus until it reaches a switching speed so fast that we can observe a reduction in entanglement effects (as we approached the threshold for the FTL signalling mechanism I proposed earlier, we would expect entanglement effects to gradually fail). This would be tricky with the original Aspect setup, since we would have to switch the deflectors very precisely, almost as the photons were about to hit them (remember, we are looking for a faster-than-light signalling mechanism between the entangled photons).
 
  • #482
my_wan said:
...I read your negative probabilities page at:
http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm
I was thinking in terms of a given value E(a,b) from possible outcomes P(A,B|a,b) in the general proof of Bell's theorem. You had something else in mind.

What you have, at your link, is 3 measurements at angles A=0, B=67.5, and C=45. A and B are actual measurements where C is a measurement that could have been performed at A or B, let's say B in this case. This does indeed lead to the given negative probabilities, if you presume that what you measured at B cannot interfere with what you could have measured at C, had you done the 3 measurements simultaneously. The counterfactual reasoning is quoted: "When measuring A and B, C existed even if we didn't measure it."

So where do the negative probabilities come from here?...

So the page quotes: "When measuring A and B, C existed even if we didn't measure it." But not when, with the measurements performed separately, some subset of the particles is measured by both B and C. Thus when you consider simultaneous measurements at these detectors, the same particles must be detected twice, by both B and C simultaneously, to be counterfactually consistent with the separate measurements.

Now I know this mechanism can account for interference in counterfactual detection probabilities, but you can legitimately write it off until the sine wave interference predicted by QM is quantitatively modeled by this interference mechanism. But I still maintain the more limited claim that the counterfactual reasoning contained in the quote: "When measuring A and B, C existed even if we didn't measure it" is falsified by the fact that the same particles cannot simultaneously be involved in detections at B and C. Yet it still existed, at one or the other detector, just not both. Probability interference is a hallmark of QM.

OK, there are a couple of issues. This is indeed a counterfactual case. There are only 2 readings, not 3, so you have that correct.

As to interference: yes, you must consider the idea that there is a connection between Alice and Bob. But NOT in the case that there is local realism. In that case - which is where the negative probabilities come from - there is no such interaction. QM would allow the interaction, but explicitly denies that the counterfactual case exists, because it is not a Realistic theory.
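The arithmetic behind those negative probabilities can be reproduced in a few lines (a sketch, not the code from the linked page; the w1..w4 grouping is my own labeling of the hidden-variable populations):

```python
import math

def match_prob(delta_deg: float) -> float:
    """QM probability that both photons of an entangled pair give the same
    result when the two analyzer settings differ by delta_deg: cos^2(delta)."""
    return math.cos(math.radians(delta_deg)) ** 2

# Settings from the linked page: A = 0, B = 67.5, C = 45 (degrees)
m_ab = match_prob(67.5)  # A vs B
m_ac = match_prob(45.0)  # A vs C
m_bc = match_prob(22.5)  # B vs C

# Local realism: each pair carries definite answers for A, B and C.
# Group the eight outcome patterns by which pairs of answers agree:
#   w1: all three agree        w2: only A,B agree
#   w3: only A,C agree         w4: only B,C agree
# Then m_ab = w1 + w2, m_ac = w1 + w3, m_bc = w1 + w4,
# and w1 + w2 + w3 + w4 = 1, which forces:
w1 = (m_ab + m_ac + m_bc - 1) / 2
w2 = m_ab - w1  # population where A,B agree but C disagrees

print(f"w2 = {w2:.4f}")  # -0.1036: a "population" with negative probability
```

Forcing the unmeasured C to carry a definite value is exactly the counterfactual step; the QM match rates then leave no non-negative assignment for w2.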
 
  • #483
unusualname said:
Earlier, I suggested a scientific experiment that would help settle this one way or the other, since as I understand it my_wan's explanation for the non-local correlations in entanglement would require that the correlations are instantaneous.

If we can demonstrate any delay in the entanglement correlations would that not rule out the relational theory of QM or the existence of fundamental probabilistic elements of reality (probabilistic realism)?

In principle it may be possible to construct a quantum computer which could record the time of qubit switching for certain qubits, although we would have to factor out the limit on qubit switching speed imposed by the uncertainty principle (mentioned previously).

Alternatively it may be possible to demonstrate a delay in Aspect-type experiments by refining the timing and precision of the switching apparatus until it reaches a switching speed so fast that we can observe a reduction in entanglement effects (as we approached the threshold for the FTL signalling mechanism I proposed earlier, we would expect entanglement effects to gradually fail). This would be tricky with the original Aspect setup, since we would have to switch the deflectors very precisely, almost as the photons were about to hit them (remember, we are looking for a faster-than-light signalling mechanism between the entangled photons).

Scientists would in fact love to answer this question. Experiments have been done to the edge of current technology, and no limit has been found yet up to 10,000 times c. So I expect additional experiments as time goes on. If I see anything more on this, I will post it.
 
  • #484
ThomasT said:
One can't get much more unscientific, or nonscientific, than to posit that Nature is fundamentally nonlocal.

Not only is it "scientific" to posit that Nature is fundamentally nonlocal, it is also the only "logical" thing to do. That is, we know that the physical space of our universe consists of three dimensions. Through pure force of reasoning, therefore, we should expect that the elements that constitute a physical reality such as ours are fundamentally spatial in nature. It is for this reason that Erwin Schrodinger posited the existence of a mathematically defined, dynamical object that can be understood--for lack of a better phrase--as an "ontological unity".

The main problem here, though, is that physics had never before been in a position to come to terms with the necessarily space-occupying nature of elemental reality. And this is indeed *necessary* because a three-dimensional universe that consists only of purely local (i.e. zero-dimensional) objects is simply a void. That is, all objects that are anything less than three-dimensional will occupy precisely a zeroth of the space of the universe. In other words, it only makes sense to understand that the parts of a three-dimensional universe are themselves three-dimensional.

But the reason why locality is taken so seriously by certain "naive" individuals is because the entire course of physics since the time of Galileo (up to the 20th century) has been simply to chart the trajectories of empirical bodies through "void" space rather than to come to terms with the way in which any such experience of physical separateness is at all possible.

So, we can now understand Newton's famous hypotheses non fingo as an implicit acknowledgment that the question of the "true nature" of physical reality is indeed an interesting/important question, but that his particular job description at Cambridge University did not give him any reason to depart from the [nascent] tradition of physics as empirical prediction rather than ontological description.

But given the rise of Maxwellian "field type" theories in the 19th century, the question of the space-filling quality of elemental matter could not be ignored for much longer. It is for this reason that ether theories came into prominence. So by the early 1900's, there was an urgent need to find a resolution between the manifestly continuous aspects and granular aspects of physical experience.

This resolution was accomplished by way of the logical "quantization" of the electromagnetic continuum, giving a way for there to be a mathematical description for the way in which atoms are able to interact with one another. That is, photons are taken to be "radiant energy particles" that are able to cross the "void" that separates massive bodies. So, we must understand that the desire to understand energy in a quantitative way was nothing other than a continuation of the Newtonian project of developing theories of a mathematically analytical nature, rather than a break from classical Newtonian thought. That is, the *real* break from the classical model is Maxwell's notion that there is a continuous "something" that everywhere permeates space. One implication of this way of thinking is that this "something" is the only "ontologically real thing," and that all experiences of particularity are made possible by modulations of continuous fields.

The reason why there is so much difficulty in coming into a physical theory that attains the status of being a compelling, "ontologically complete" model is that there is always a desire on the parts of human beings to be able to predict phenomena--that is, to be "certain" about the future course of events. And our theories reflect this desire by way of being reduced to trivially solvable mathematical formulations (i.e. differential equations of a single independent variable) rather than existing in formulations whose solutions are anything but apparent (i.e. partial differential equations of several independent variables).

So, we can now understand that Schrodinger's idea of reality as consisting of harmonically oscillating, space filling waveforms raised an extremely ominous mathematical spectre--which was summarily overcome by way of the thought of the psi function as a "field of probabilities" that can be satisfactorily "reduced" by way of applying Hermitian operators (i.e. matrices of complex conjugates) to it.

But now, we can see that Schrodinger's conceptually elegant ontological solution has been replaced by a purely logical formalism that is not meant to have any ontological significance. That is, the system of equations that can be categorized under the general heading of "quantum mechanics" is only meant to be a theory of empirical measurement, rather than a theory that offers any guidance as regards "what" it is that is "really" happening when any experimental arrangement registers a result.

So, if there is anyone who is searching for, shall we say, "existential comfort" as regards the nature of the "stuff" of physical reality, you are setting yourself up for major disappointment by looking towards the mainstream academic physics establishment (with physicsforums being its best online representative). Your best bet would probably be to pick up a book by or about Erwin Schrodinger, the man, rather than a book that merely uses his name in its exposition of the pure formalism that is quantum mechanics.

And other than that, I am doing my best to continue the tradition of pushing towards a thoroughly believable ontological theory of physical reality here at physicsforums.
 
  • #485
DrChinese said:
OK, there are a couple of issues. This is indeed a counterfactual case. There are only 2 readings, not 3, so you have that correct.

As to interference: yes, you must consider the idea that there is a connection between Alice and Bob. But NOT in the case that there is local realism. In that case - which is where the negative probabilities come from - there is no such interaction. QM would allow the interaction, but explicitly denies that the counterfactual case exists, because it is not a Realistic theory.

To the assertion "NOT in the case that there is local realism":
In the local realism assumption, the connection between Alice and Bob is carried by the particles as inverse local properties, and read via statistical coincidence responses to polarizers with various settings. Since Alice and Bob are the emitted particle pairs, C is not Charlie, but a separate interrogator of Bob asking for Bob's identity papers. In any reasonable experimental construction, of B and the counterfactual C, one gets to Bob first to interrogate his identity. But whichever interrogates Bob first interferes with the other getting to interrogate Bob as well. This is a requirement of local realism.

Thus Alice and Bob are the particles emitted, not the A, B, and C interrogators (polarizers) that you choose to interrogate Alice and Bob's identity with, nor the singular arbitrary settings A, B, and C used to interpret Alice and Bob's reported identity.

If this explanation holds, then your model, used to refute the De Raedt team's modeling attempts, is physically valid in interference effects, but fails to fully refute them.
http://msc.phys.rug.nl/pdf/athens06-deraedt1.pdf
The physical interpretation of the negative probability, defined from the possibly valid explanation given, is actually a positive probability that the interrogator C will interrogate Bob first, before B gets to him. Thus if you assign this probability to interrogator B instead of C, which actually intercepted Bob, it takes a negative value.

This means my original assumption when you challenged me on negative probabilities, before I read your site's page on negative probabilities, wasn't as far off as I thought. As I stated then, the negative probability results from a case instance E(a,b) of a possibility derived from probability P(A,B|a,b), thus not technically a probability in the strict sense. As I noted then, this only occurs "when detections are more likely in only one of the detectors, rather than neither or both", such as when interrogator C gets to Bob before interrogator B does, producing a detection at C and a miss at B, for which a full, and possibly valid, explanation was given above.

QM:
Technically QM neither confirms nor denies counterfactual reasoning. It merely calculates for whatever situation you 'actually' provide it. The counterfactual conflict only comes in after the fact, when you compare two cases you 'actually' provided. The fact that QM is not explicitly time dependent makes counterfactual reasoning even more difficult to interpret. If any time-dependent phenomena are involved, they must be counterfactually interpreted as something that occurred between an event and a measurement, for which we have no measurements to empirically define them, except after the fact when the measurement is performed.
 
  • #486
ajw1 said:
For those acquainted with C#, I have the same de Raedt simulation, but converted in an object-oriented way (this allows a clear separation between the objects (particles and filters) used in the simulation).

OOP is cool, my favorite is Delphi/Object Pascal, which is very similar to C# (Anders Hejlsberg was/is chief architect of both).

Maybe I’ll check https://www.physicsforums.com/showpost.php?p=2728427&postcount=464. They are claiming to prove deterministic EPR–Bohm & NLHVT, stating this:
Thus, these numbers are no “random variables” in the strict mathematical sense. Probability theory has nothing useful to say about the deterministic sequence of these numbers. In fact, it does not even contain, nor provides a procedure to generate random variables.

(!?) Funny approach... when the probabilistic nature of QM is the very foundation of Bell’s work...?? It’s like proving that Schrödinger's cat can’t run, by cutting off the legs?:bugeye:?
(And true random numbers from atmospheric noise are available for free at random.org :wink:)

And what is this "time window"?? The code is executed sequentially, not multithreaded or parallel! And then multiply the "measurement" with a (pseudo-)random number to "check" if the "measurement" is inside this "time window"!? ... jeeess, I wouldn’t call this a "simulation"... more like an "imitation".

And De Raedt has another gigantic problem with his 'proof' of the non-local hidden variable theory:
http://arxiv.org/abs/0704.2529
(Anton Zeilinger et al.)

Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories.


The best statement in the de Raedt article is this:
In the absence of a theory that describes the individual events, the very successful computational-physics approach “start from the theory and invent/use a simulation algorithm” cannot be applied to this problem.

Which leads to the next...

(I still admire all the work that you and DrC have put into this.)

ajw1 said:
But an open framework should probably be started in something like http://maxima.sourceforge.net/.

The more I think about an "EPR framework" I realize it’s probably not a splendid idea, as De Raedt says – we don’t have a theory that describes the individual events. We don’t know what really happens!

So it’s going to be very hard, if not impossible, to produce an 'all-purpose' framework, that could be used for testing new ideas. All we can do is what De Raedt has done – to mimic already performed experiments.

I think...

If you think I’m wrong, there’s another nice alternative to Maxima in http://en.wikipedia.org/wiki/FreeMat, which has an interface to external C, C++, and Fortran code (+ loading dll’s).

Cheers!
 
  • #487
DrChinese said:
I have cos^2(22.5) as 85.36%, although I don't think the value matters for your example. I think you are calculating cos^2 - sin^2 - matches less non-matches - to get your rate, which yields a range of +1 to -1. I always calc based on matches, yielding a range from 0 to 1. Both are correct.

:biggrin:


You bet! Because I’ve got my value from a public lecture by Alain Aspect at the Perimeter Institute for Theoretical Physics, talking about Bell's theorem! :smile:
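For the record, the two bookkeeping conventions DrC describes are easy to check side by side (a quick sketch; both describe the same physics):

```python
import math

def match_rate(delta_deg: float) -> float:
    """DrC's convention: fraction of matches, range 0 to 1."""
    return math.cos(math.radians(delta_deg)) ** 2

def correlation(delta_deg: float) -> float:
    """Matches minus non-matches: cos^2 - sin^2 = cos(2*delta), range -1 to +1."""
    d = math.radians(delta_deg)
    return math.cos(d) ** 2 - math.sin(d) ** 2

print(round(match_rate(22.5), 4))   # 0.8536 -> the 85.36% figure
print(round(correlation(22.5), 4))  # 0.7071 -> the same data on a -1..+1 scale
```

The two scales are related by correlation = 2 * match_rate - 1, which is why both are correct.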

To avoid this thread soon getting the subtitle – "The noble art of not answering simple questions" – I’m going to act proactively. :biggrin:

This is wrong:
DrChinese said:
We can perform the test on Alice, and use that result to predict Bob. If we can predict Bob with certainty, without changing Bob in any way prior to Bob's observation, then the Bob result is "real". Bell real.


Bell's theorem is all about statistical QM probability (except for 0° and 90°, which LHV also handles perfectly).
 
  • #488
DevilsAvocado said:
(!?) Funny approach... when the probabilistic nature of QM is the very foundation of Bell’s work...?? It’s like proving that Schrödinger's cat can’t run, by cutting off the legs?:bugeye:?
(And true random numbers from atmospheric noise are available for free at random.org :wink:)
It is very common to use pseudo-random numbers in these kinds of simulations, and often not worth the effort to get real random values. I don't think this is really an issue, provided that your pseudo-random generator is OK for the purpose.
DevilsAvocado said:
And what is this "time window"?? The code is executed sequentially, not multithreaded or parallel! And then multiply the "measurement" with a (pseudo-)random number to "check" if the "measurement" is inside this "time window"!? ... jeeess, I wouldn’t call this a "simulation"... more like an "imitation".
De Raedt is not proposing a hidden variable theory; he says he can obtain the results of real Bell-type experiments in a local realistic way.
So in real experiments one has to use a time frame for determining whether two clicks at the detectors belong to each other or not.
There are indications that particles are delayed by the angle of the filter. This delay time is used by de Raedt, and he obtains the exact QM prediction for this setup (well, similar results to the real experiment, which more or less follows the QM prediction).

DevilsAvocado said:
The more I think about an "EPR framework" I realize it’s probably not a splendid idea, as De Raedt says – we don’t have a theory that describes the individual events. We don’t know what really happens!

So it’s going to be very hard, if not impossible, to produce an 'all-purpose' framework, that could be used for testing new ideas. All we can do is what De Raedt has done – to mimic already performed experiments.

I think...

If you think I’m wrong, there’s another nice alternative to Maxima in http://en.wikipedia.org/wiki/FreeMat, which has an interface to external C, C++, and Fortran code (+ loading dll’s).

Cheers!
I don't know about 'all-purpose'. It seems to me that a De Raedt-like simulation structure should be able to obtain the datasets DrChinese often mentions, for all kinds of new LR ideas.
 
  • #489
DevilsAvocado said:
This is wrong:

[something DrChinese says...]

Bell's theorem is all about statistical QM probability (except for 0° and 90°, which LHV also handles perfectly).

Ha!

But we are talking about 2 different things. Yes, Bell is about the statistical predictions of QM vs. Local Realism. But both EPR and Bell use the idea of the "elements of reality" (defined as I have) as a basis for their analysis.

Score: Avocado 1, DrC 1.
 
  • #490
ajw1 said:
1. De Raedt is not proposing a hidden variable theory; he says he can obtain the results of real Bell-type experiments in a local realistic way.

So in real experiments one has to use a time frame for determining whether two clicks at the detectors belong to each other or not.

There are indications that particles are delayed by the angle of the filter. This delay time is used by de Raedt, and he obtains the exact QM prediction for this setup (well, similar results to the real experiment, which more or less follows the QM prediction).

2. I don't know about 'all-purpose'. It seems to me that a De Raedt-like simulation structure should be able to obtain the datasets DrChinese often mentions, for all kinds of new LR ideas.

1. The delay issue is complicated, but the bottom line is this is a testable hypothesis. I know of several people who are investigating this by looking at the underlying data using a variety of analysis techniques. I too am doing some work in this particular area (my expertise is in the data processing side). At this time, there is no evidence at all for anything which might lead to the bias De Raedt et al propose. But there is some evidence of delay on the order of a few ns. This is far too small to account for pairing problems.

2. Yes, it is true that the De Raedt simulation exploits the so-called "fair sampling assumption" (the time window) to provide a dataset which is realistic. The recap on this is:

a) The full universe obeys the Bell Inequality, and therefore does not follow Malus.
b) The sample violates the Bell Inequality and is close to the QM predictions.
c) The model is falsified for entangled photons which are not polarization entangled.
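Point a) can be illustrated with a toy local model (my own minimal example, not the De Raedt code): give each pair one shared polarization and let each station answer deterministically from it. With every event counted, the correlation comes out weaker than the QM cosine, as the Bell Inequality requires.

```python
import math
import random

rng = random.Random(12345)  # fixed seed so the run is reproducible

def local_outcome(pol: float, setting: float) -> int:
    """Deterministic local rule: +1 if the shared polarization lies
    within 45 degrees of the analyzer axis (mod 180), else -1."""
    d = abs((pol - setting + 90.0) % 180.0 - 90.0)
    return 1 if d < 45.0 else -1

def correlation(delta_deg: float, n: int = 200_000) -> float:
    """Full-universe correlation: every pair is counted, none discarded."""
    total = 0
    for _ in range(n):
        pol = rng.uniform(0.0, 180.0)       # shared hidden variable
        a = local_outcome(pol, 0.0)         # Alice at 0 degrees
        b = local_outcome(pol, delta_deg)   # Bob at delta degrees
        total += a * b
    return total / n

e_model = correlation(22.5)
e_qm = math.cos(math.radians(2 * 22.5))  # QM prediction: cos(2*delta)
print(e_model, e_qm)  # model ~0.5 vs QM ~0.707
```

The toy model gives a correlation linear in the angle difference rather than the cos(2*delta) of QM (or the cos^2 of Malus), which is exactly the "full universe obeys Bell" behavior described in a).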
 
  • #491
DrChinese said:
Score: Avocado 1, DrC 1.

Okay, I give up, you are right (as always)... :redface:

Score: (Smashed)Avocado ≈1, DrC >1.

:smile:
 
  • #492
DrChinese said:
But there is some evidence of delay on the order of a few ns. This is far too small to account for pairing problems.

It is not my intention to discuss the De Raedt model here intensively, but the time window used in the simulation and in the real experiment seems to be on the order of a few nanoseconds, so in the same range as the evidence you mention (I haven't seen the articles with this evidence yet). Or am I misreading your statement?

But more important to this thread, I think, is that when his time-tag calculation is turned off, the event-by-event simulation can be used to test other LR theories.
 
  • #493
ajw1 said:
It is very common to use pseudo-random numbers in these kinds of simulations, and often not worth the effort to get real random values. I don't think this is really an issue, provided that your pseudo-random generator is OK for the purpose.

Okay, you have spent a lot more time on this than me. At the same time this is interesting, as pseudo-random numbers are deterministic in the sense that if we know the seed, we can calculate the 'future' in a deterministic way. To my understanding, this is exactly what LHV does, right?

Conclusion: If we can make a computer version of EPR/BTE to produce the correct statistics with pseudo-random numbers, we have then automatically proved that (N)LHVT is correct!
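The seed point is easy to demonstrate (a trivial sketch; whether reproducing the statistics this way would prove anything about LHV is exactly what is being debated here):

```python
import random

# Two generators started from the same seed produce identical streams,
# so the whole 'future' of the sequence is fixed in advance by the seed:
gen1 = random.Random(2010)
gen2 = random.Random(2010)
seq1 = [gen1.random() for _ in range(5)]
seq2 = [gen2.random() for _ in range(5)]
print(seq1 == seq2)  # True
```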

ajw1 said:
So in real experiments one has to use a time frame for determining whether two clicks at the detectors belong to each other or not.
There are indications that particles are delayed by the angle of the filter. This delay time is used by de Raedt, and he obtains the exact QM prediction for this setup (well, similar results to the real experiment, which more or less follows the QM prediction).

Yes I know, http://en.wikipedia.org/wiki/Coincidence_counting_(physics) is needed in real experiments, where detections must be sorted into time bins. This is an interesting problem because, as we all know, there is no noise or disturbance in sequentially executed code without bugs, and there is no problem getting 100% detection (unless we do a BASIC GOTO SpaghettiMessUpRoutine() :smile:):
Start
...
Detection1
...
Detection2
...
EvaluateDetections
...
End​
(= almost impossible to fail)

So what does de Raedt do? He implements the 'weakness' of real experiments, and that’s maybe okay. What I find 'peculiar' is what pseudo-random numbers times a measurement have to do with real time bins and coincidence counting... I don’t get it...

ajw1 said:
I don't know about 'all-purpose'. It seems to me that a De Raedt-like simulation structure should be able to obtain the datasets DrChinese often mentions, for all kinds of new LR ideas.

Okay, if you say so.
 
  • #494
DrChinese said:
1. The delay issue is complicated, but the bottom line is this is a testable hypothesis. I know of several people who are investigating this by looking at the underlying data using a variety of analysis techniques. I too am doing some work in this particular area (my expertise is in the data processing side). At this time, there is no evidence at all for anything which might lead to the bias De Raedt et al propose. But there is some evidence of delay on the order of a few ns. This is far too small to account for pairing problems.

DrC, I haven’t had the time to test/modify your code (need a new/bigger HDD to install Visual Studio), but what happens if you completely skip "Coincidence counting" in the code?

(To me it seems very strange to build a whole scientific theory on noise and the 'troubles' of real measurements... :confused:)


EDIT: Ahh! I see ajw1 just answered the question...
 
  • #495
ajw1 said:
... but the time window used in the simulation and in the real experiment seems to be on the order of a few nanoseconds ...

How can you convert "a few nanoseconds" into this code?

[attached image: code screenshot, 6oztpt.png]
 
  • #496
DevilsAvocado said:
How can you convert "a few nanoseconds" into this code?

[attached image: code screenshot, 6oztpt.png]

The remark is based on http://rugth30.phys.rug.nl/pdf/shu5.pdf; see for example page 8.
 
  • #497
ajw1 said:
The remark is based on http://rugth30.phys.rug.nl/pdf/shu5.pdf; see for example page 8.

Okay, thanks. It could most certainly be my lack of knowledge of polarizers and physics that makes me see this as "strange", but I can’t help it – how can anyone derive "a few nanoseconds" from this? It’s just a mysteryy to me. There are no clocks or timing in the sequential code, just Pi, Cos and Radians:
5.4 Time Delay

In our model, the time delay t_{n,i} for a particle is assumed to be distributed uniformly over the interval [t0, t0 + T]. In practice, we use uniform pseudo-random numbers to generate t_{n,i}. As in the case of the angles ξ_n, the random choice of t_{n,i} is merely convenient, not essential. From (2), it follows that only differences of time delays matter. Hence, we may put t0 = 0. The time-tag for the event n is then t_{n,i} ∈ [0, T]. There are not many reasonable options to choose the functional dependence of T. Assuming that the particle “knows” its own direction and that of the polarizer only, T should be a function of the relative angle only. Furthermore, consistency with classical electrodynamics requires that functions that depend on the polarization have period π [27]. Thus, we must have T(ξ_n − θ1) = F((S_{n,1} · a)²) and, similarly, T(ξ_n − θ2) = F((S_{n,2} · b)²), where b = (cos β, sin β). We found that T(x) = T0 |sin 2x|^d yields the desired results [15]. Here, T0 = max_θ T(θ) is the maximum time delay and defines the unit of time used in the simulation. In our numerical work, we set T0 = 1.


To me, this looks like "trial & error", but I could be catastrophically wrong...
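For what it's worth, the quoted delay rule is short enough to write down directly (a sketch, not the actual De Raedt code; the exponent d is left as a free model parameter, and the default value here is arbitrary):

```python
import math
import random

rng = random.Random(7)  # fixed seed, for reproducibility only

def time_tag(xi_deg: float, theta_deg: float,
             T0: float = 1.0, d: float = 2.0) -> float:
    """Draw a time tag t uniformly from [0, T(xi - theta)], where
    T(x) = T0 * |sin(2x)|**d as in the quoted passage. T0 sets the unit
    of time; the exponent d is a free parameter of the model."""
    x = math.radians(xi_deg - theta_deg)
    T = T0 * abs(math.sin(2.0 * x)) ** d
    return rng.uniform(0.0, T)

# Aligned with the polarizer: T collapses to zero, so no delay spread.
print(time_tag(0.0, 0.0))   # 0.0
# At 45 degrees relative angle the spread is maximal: t lies in [0, T0].
print(time_tag(45.0, 0.0))
```

So the nanoseconds never appear in the code itself: T0 = 1 just defines the simulation's time unit, which is only mapped onto a physical window when comparing with experiment.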
 
Last edited by a moderator:
  • #498
You simulation guys might be interested in this paper, "Corpuscular model of two-beam interference and double-slit experiments with single photons",

where they demonstrate single particle interference with a computer model which models the particles as "information carriers" exchanging information with the experimental apparatus. No wave function or non-local effects are assumed.

I like the idea that particles might exchange protocols like packets in a wifi network, but it seems a bit unlikely :smile:
 
  • #499
Thanks UN, I have to leave for a short break, but I’ll check the link later!
 
  • #500
ajw1 said:
It is not my intention to discuss the De Raedt model here intensively, but the time window used in the simulation and in the real experiment seems to be on the order of a few nanoseconds, so in the same range as the evidence you mention (I haven't seen the articles with this evidence yet). Or am I misreading your statement?

As I say, it is a bit complicated. Keep in mind that the relevant issue is whether the delay is MORE for one channel or not. In other words, similar delays on both sides have little effect. I use the setup of Weihs et al as my "golden standard".

Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger (Submitted on 26 Oct 1998)
http://arxiv.org/abs/quant-ph/9810080

As to the size of the window itself: Weihs uses 6 ns for their experiment. As there are about 10,000 detections per second, the average separation between clicks is on the order of 100,000 ns, vastly larger than the window. The De Raedt simulation can obviously be modified to whatever window size you like.

It follows that if you altered the window size and got a different result, that would be significant. But with a large time difference between most events, I mean, seriously, what do you expect to see here? ALL THE CLICKS ARE TAGGED! It's not like they were thrown away.
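The window bookkeeping itself is simple to state in code (a sketch of the idea, not the Weihs pipeline): every click carries a time tag, and two clicks count as a pair when their tags differ by less than the window.

```python
def count_coincidences(tags_alice, tags_bob, window_ns: float) -> int:
    """Greedy pairing of time-tagged clicks: each Alice click is matched
    with the first unused Bob click whose tag lies within the window."""
    used = set()
    pairs = 0
    for ta in tags_alice:
        for i, tb in enumerate(tags_bob):
            if i not in used and abs(ta - tb) <= window_ns:
                used.add(i)
                pairs += 1
                break
    return pairs

# Toy data, times in ns: three near-coincident pairs and one orphan each side
alice = [100.0, 25_100.0, 50_200.0, 99_999.0]
bob   = [103.0, 25_104.0, 50_215.0, 12_345.0]

print(count_coincidences(alice, bob, 6.0))   # 2 pairs survive a 6 ns window
print(count_coincidences(alice, bob, 20.0))  # 3 pairs with a looser window
```

Since all clicks are tagged, re-running the same data with different window sizes is exactly the kind of after-the-fact test described above.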

When I finish my analysis of the data (which is a ways off), I will report on anything I think is of interest. In the meantime, I might suggest the following article if you want to learn more from someone who has studied this extensively:

http://arxiv.org/abs/0801.1776

Violation of Bell inequalities through the coincidence-time loophole, Peter Morgan, (11 Jan 2008)

"The coincidence-time loophole was identified by Larsson & Gill (Europhys. Lett. 67, 707 (2004)); a concrete model that exploits this loophole has recently been described by De Raedt et al. (Found. Phys., to appear). It is emphasized here that De Raedt et al.'s model is experimentally testable. De Raedt et al.'s model also introduces contextuality in a novel and classically more natural way than the use of contextual particle properties, by introducing a probabilistic model of a limited set of degrees of freedom of the measurement apparatus, so that it can also be seen as a random field model. Even though De Raedt et al.'s model may well contradict detailed Physics, it nonetheless provides a way to simulate the logical operation of elements of a quantum computer, and may provide a way forward for more detailed random field models."

Peter has been designing theoretical models for a number of years, with an emphasis on those with local random fields. I don't consider him a local realist (although I am not sure how he labels himself) because he respects Bell.
 
  • #501
DevilsAvocado said:
So what does de Raedt do? He implements the 'weakness' of real experiments, and that’s maybe okay. What I find 'peculiar' is how pseudo-random numbers applied to the measurements have anything to do with real time bins and coincidence counting... I don’t get it...

I don't think that would be a fair characterization of the de Raedt model. First, it is really a pure simulation. At least, that is how I classify it. I do not consider it a candidate theory. The "physics" (such as the time window stuff) is simply a very loose justification for the model. I accept it on face as an exercise.

The pseudo-random numbers have no effect at all (at least to my eyes). You could re-seed or not all you want, it should make no difference to the conclusion.

The important thing - to me - is the initial assumptions. If you accept them, you should be able to get the desired results. You do. Unfortunately, you also get undesired results, and these are going to be present in ANY simulation model as well. It is as if you say: All men are Texans, and then I show you some men who are not Texans. Clearly, counterexamples invalidate the model.
 
  • #502
DevilsAvocado said:
To me, this looks like "trial & error", but I could be catastrophically wrong...

I would guess that they did a lot of trial and error to come up with their simulations. It had to be reverse engineered. I have said many times that for these ideas to work, there must be a bias function which is + sometimes and - others. So one would start from that. Once I know the shape of the function (which is cyclic), I would work on the periodicity.
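As a toy illustration of the kind of function meant here, a Python sketch of a cyclic bias that is positive at some relative angles and negative at others. The cos(2θ) shape is a hypothetical stand-in of mine, not the function actually used in the De Raedt simulations.

```python
import math

# Hypothetical cyclic bias function: + at some relative angles, - at others.
def bias(theta_deg):
    return math.cos(2 * math.radians(theta_deg))

for theta in (0, 22.5, 45, 67.5, 90):
    print(theta, round(bias(theta), 3))  # sign flips as the angle sweeps one period
```

Once the sign pattern of such a function is fixed, the remaining freedom is its periodicity, which is what one would tune next.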
 
  • #503
Thanks for the clarification DrC.
 
  • #504
unusualname said:
... No wave function or non-local effects are assumed.

I like the idea that particles might exchange protocols like packets in a wifi network, but it seems a bit unlikely :smile:

Yeah! I also like this approach.
In our simulation approach, we view each photon as a messenger carrying a message. Each messenger has its own internal clock, the hand of which rotates with frequency f. As the messenger travels from one position in space to another, the clock encodes the time-of-flight t modulo the period 1/f. The message, the position of the clock’s hand, is most conveniently represented by a two-dimensional unit vector ...
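A minimal Python sketch of the messenger picture in that passage: a clock hand rotating at frequency f encodes the time of flight modulo the period 1/f as a two-dimensional unit vector. The frequency and flight time below are illustrative values, not numbers from the De Raedt papers.

```python
import math

# Messenger "message": the position of a clock hand rotating at frequency f,
# encoded as a 2D unit vector after a given time of flight.
def message(time_of_flight, f):
    phase = 2 * math.pi * f * (time_of_flight % (1 / f))
    return (math.cos(phase), math.sin(phase))

f = 5e14        # illustrative optical frequency, Hz
t = 3.2e-9      # illustrative time of flight, s
x, y = message(t, f)
print(round(x**2 + y**2, 6))  # → 1.0, the message is always a unit vector
```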


Through this I found three other small Mathematica simulations (with minimal code) that relate to EPR:

• Bell's Theorem
• Generating Entangled Qubits
• Retrocausality: A Toy Model

(All include a small web preview + code)
 

  • #505
And this:

Event-by-Event Simulation of Double-Slit Experiments with Single Photons
 

  • #506
DrChinese said:
1. As I say, it is a bit complicated. Keep in mind that the relevant issue is whether the delay is MORE for one channel or not. In other words, similar delays on both sides have little effect.

2. I use the setup of Weihs et al as my "golden standard". Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger (Submitted on 26 Oct 1998)
http://arxiv.org/abs/quant-ph/9810080

3. As to the size of the window itself: Weihs uses 6 ns for their experiment. As there are about 10,000 detections per second, the average separation between clicks might be on the order of 25,000 ns. The De Raedt simulation can be modified for the size you like obviously.

It follows that if you altered the window size and got a different result, that would be significant. But with a large time difference between most events, I mean, seriously, what do you expect to see here? ALL THE CLICKS ARE TAGGED! It's not like they were thrown away.

4. When I finish my analysis of the data (which is a ways off), I will report on anything I think is of interest.

1. I think the delay is only important when it depends on the angle of the filter. This dependence can be the same on both sides.

2. De Raedt's work is based on the same article/data

3. All clicks are tagged, but not all clicks are used (that's why one uses a time window). It appears from De Raedt's analysis of the data from Weihs et al. that one needs a time window on the order of several ns to obtain QM-like results, the optimum being near 4 ns. Either a larger or a smaller time window yields worse results (and the reason for the latter is not that the dataset becomes too small for correct statistics).

4. You were able to obtain the raw data from Weihs et al.? I tried to find them, but I think they are no longer available on their site.
 
  • #507
ajw1 said:
1. I think the delay is only important when it depends on the angle of the filter. This dependence can be the same on both sides.

2. De Raedt's work is based on the same article/data

3. All clicks are tagged, but not all clicks are used (that's why one uses a time window). It appears from De Raedt's analysis of the data from Weihs et al. that one needs a time window on the order of several ns to obtain QM-like results, the optimum being near 4 ns. Either a larger or a smaller time window yields worse results (and the reason for the latter is not that the dataset becomes too small for correct statistics).

4. You were able to obtain the raw data from Weihs et al.? I tried to find them, but I think they are no longer available on their site.

1. Keep in mind, the idea of some delay dependent on angle is purely hypothetical. There is no actual difference in the positions of the polarizers in the Weihs experiment anyway. It is fixed. To change angle settings:

"Each of the observers switched the direction of local polarization analysis with a transverse electro-optic modulator. Its optic axis was set at 45° with respect to the subsequent polarizer. Applying a voltage causes a rotation of the polarization of light passing through the modulator by a certain angle proportional to the voltage [13]. For the measurements the modulators were switched fast between a rotation of 0° and 45°."


2. Yup. Makes it nice when we can all agree upon a standard.


3. I think you missed my point. I believe Weihs would have called attention to it if the results agreed with QM for the 6 ns case but not the 12 ns case (or whatever). It would in fact be shocking if any element of QM were experimentally disproved, don't you think? As with any experiment, the team must make decisions on a variety of parameters. If anyone seriously thinks that there is something going on with the detection window, hey, all they have to do is conduct the experiment.


4. I couldn't find it publicly.
 
  • #508
DrChinese said:
... the idea of some delay dependent on angle is purely hypothetical ...

That’s a big relief! :approve:
 
  • #509
DrChinese said:
3. I think you missed my point. I believe Weihs would have called attention to it if the results agreed with QM for the 6 ns case but not the 12 ns case (or whatever). It would in fact be shocking if any element of QM were experimentally disproved, don't you think? As with any experiment, the team must make decisions on a variety of parameters. If anyone seriously thinks that there is something going on with the detection window, hey, all they have to do is conduct the experiment.
I was not suggesting any unfair play by Weihs (re-reading my post I agree it looks a bit that way) :wink:. Furthermore, as I said, De Raedt has analysed the raw data from Weihs et al. and published the exact relation between the chosen time window and the results: http://rugth30.phys.rug.nl/pdf/shu5.pdf. But surely you must have read this article.
 
  • #510
DrChinese said:
1. Pot calling the kettle...

2. You apparently don't follow Mermin closely. He is as far from a local realist as it gets.

Jaynes is a more complicated affair. His Bell conclusions are far off the mark and are not accepted.
Again you've missed the point. I'm guessing that you probably didn't bother to read the papers I referenced.

DrChinese said:
I am through discussing with you at this time. You haven't done your homework on any of the relevant issues and ignore my suggestions. I will continue to point out your flawed comments whenever I think a reader might actually mistake your commentary for standard physics.
You haven't been discussing any of the points I've brought up anyway. :smile: You have a mantra that you repeat.

Here's another question for you. Is it possible that maybe the issue is a little more subtle than your current understanding of it?

If you decide you want to answer the simple questions I've asked you or address the salient points that I've presented (rather than repeating your mantra), then maybe we can have an actual discussion. But when you refuse to even look at a paper, or answer a few simple questions about what it contains, then I find that suspicious to say the least.
 
