Is action at a distance possible as envisaged by the EPR paradox?

  • #501
DevilsAvocado said:
So what does de Raedt do? He implements the 'weakness' of real experiments, and that’s maybe okay. What I find 'peculiar' is how pseudo-random numbers × measurement has anything to do with real time bins and coincidence counting... I don’t get it...

I don't think that would be a fair characterization of the de Raedt model. First, it is really a pure simulation. At least, that is how I classify it. I do not consider it a candidate theory. The "physics" (such as the time window stuff) is simply a very loose justification for the model. I accept it on face as an exercise.

The pseudo-random numbers have no effect at all (at least to my eyes). You could re-seed or not all you want, it should make no difference to the conclusion.

The important thing - to me - is the initial assumptions. If you accept them, you should be able to get the desired results. You do. Unfortunately, you also get undesired results and these are going to be present in ANY simulation model as well. It is as if you say: All men are Texans, and then I show you some men who are not Texans. Clearly, counterexamples invalidate the model.
 
  • #502
DevilsAvocado said:
To me, this looks like "trial & error", but I could be catastrophically wrong...

I would guess that they did a lot of trial and error to come up with their simulations. It had to be reverse engineered. I have said many times that for these ideas to work, there must be a bias function which is + sometimes and - others. So one would start from that. Once I know the shape of the function (which is cyclic), I would work on the periodicity.
 
  • #503
Thanks for the clarification DrC.
 
  • #504
unusualname said:
... No wave function or non-local effects are assumed.

I like the idea that particles might exchange protocols like packets in a wifi network, but it seems a bit unlikely :smile:

Yeah! I also like this approach.
In our simulation approach, we view each photon as a messenger carrying a message. Each messenger has its own internal clock, the hand of which rotates with frequency f. As the messenger travels from one position in space to another, the clock encodes the time-of-flight t modulo the period 1/f. The message, the position of the clock’s hand, is most conveniently represented by a two-dimensional unit vector ...
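The quoted messenger picture is easy to sketch in code. The following is my own illustrative version, not de Raedt's actual implementation, and the function name is invented:

```python
import math

def messenger_message(time_of_flight, f):
    """Encode the time-of-flight t modulo the period 1/f as the position
    of the clock's hand: a two-dimensional unit vector, as in the quote."""
    phase = 2.0 * math.pi * f * time_of_flight  # hand angle; periodic in t with period 1/f
    return (math.cos(phase), math.sin(phase))
```

Because cosine and sine are periodic, the message is identical for t and t + 1/f, which is exactly the modulo-the-period encoding the quote describes.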


Through this I found 3 other small simulations (with minimal code) for Mathematica which relate to EPR:

"Bell's Theorem"

"Generating Entangled Qubits"

"Retrocausality: A Toy Model"

(All include a small web preview + code)
 

  • #505
And this:

"Event-by-Event Simulation of Double-Slit Experiments with Single Photons"
 

  • #506
DrChinese said:
1. As I say, it is a bit complicated. Keep in mind that the relevant issue is whether the delay is MORE for one channel or not. In other words, similar delays on both sides have little effect.

2. I use the setup of Weihs et al as my "golden standard". Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger (Submitted on 26 Oct 1998)
http://arxiv.org/abs/quant-ph/9810080

3. As to the size of the window itself: Weihs uses 6 ns for their experiment. As there are about 10,000 detections per second, the average separation between clicks might be on the order of 25,000 ns. The De Raedt simulation can be modified for the size you like obviously.

It follows that if you altered the window size and got a different result, that would be significant. But with a large time difference between most events, I mean, seriously, what do you expect to see here? ALL THE CLICKS ARE TAGGED! It's not like they were thrown away.

4. When I finish my analysis of the data (which is a ways off), I will report on anything I think is of interest.

1. I think the delay is only important when it depends on the angle of the filter. This relation can be equal on both sides.

2. De Raedt's work is based on the same article/data

3. All clicks are tagged, but not all clicks are used (that's why one uses a time window). It appears from De Raedt's analysis of the data from Weihs et al. that one needs a time window on the order of several ns to obtain QM-like results, the optimum being near 4 ns. Either a larger or a smaller time window will yield worse results (and the reason for the latter is not that the dataset becomes too small for correct statistics).
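As an aside, the kind of time-window coincidence counting being discussed can be sketched as follows. This is a simplified greedy pairing of my own with an invented function name, not the actual procedure used by Weihs or De Raedt:

```python
def count_coincidences(times_a, times_b, window):
    """Count coincidences between two sorted lists of detection time tags
    (same units, e.g. ns): a click on side A and a click on side B count as
    one coincidence when their tags differ by less than `window`.
    Two-pointer sweep; each click is used at most once."""
    i = j = count = 0
    while i < len(times_a) and j < len(times_b):
        dt = times_a[i] - times_b[j]
        if abs(dt) < window:
            count += 1
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # A's click is too early to match this B click; advance A
        else:
            j += 1  # B's click is too early to match this A click; advance B
    return count
```

With clicks typically tens of thousands of ns apart, changing the window from, say, 6 ns to 4 ns changes which of the rare near-simultaneous pairs survive, which is the dependence at issue here.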

4. You were able to obtain the raw data from Weihs et al.? I tried to find them, but I think they are no longer available on their site.
 
  • #507
ajw1 said:
1. I think the delay is only important when it depends on the angle of the filter. This relation can be equal on both sides.

2. De Raedt's work is based on the same article/data

3. All clicks are tagged, but not all clicks are used (that's why one uses a time window). It appears from De Raedt's analysis of the data from Weihs et al. that one needs a time window on the order of several ns to obtain QM-like results, the optimum being near 4 ns. Either a larger or a smaller time window will yield worse results (and the reason for the latter is not that the dataset becomes too small for correct statistics).

4. You were able to obtain the raw data from Weihs et al.? I tried to find them, but I think they are no longer available on their site.

1. Keep in mind, the idea of some delay dependent on angle is purely hypothetical. There is no actual difference in the positions of the polarizers in the Weihs experiment anyway; they are fixed. To change angle settings:

"Each of the observers switched the direction of local polarization analysis with a transverse electro-optic modulator. Its optic axis was set at 45° with respect to the subsequent polarizer. Applying a voltage causes a rotation of the polarization of light passing through the modulator by a certain angle proportional to the voltage [13]. For the measurements the modulators were switched fast between a rotation of 0° and 45°."


2. Yup. Makes it nice when we can all agree upon a standard.


3. I think you missed my point. I believe Weihs would call attention to the fact that it agrees with QM for the 6 ns case but not the 12 ns case (or whatever). It would in fact be shocking if any element of QM was experimentally disproved, don't you think? As with any experiment, the team must make decisions on a variety of parameters. If anyone seriously thinks that there is something going on with the detection window, hey, all they have to do is conduct the experiment.


4. I couldn't find it publicly.
 
  • #508
DrChinese said:
... the idea of some delay dependent on angle is purely hypothetical ...

That’s a big relief! :approve:
 
  • #509
DrChinese said:
3. I think you missed my point. I believe Weihs would call attention to the fact that it agrees with QM for the 6 ns case but not the 12 ns case (or whatever). It would in fact be shocking if any element of QM was experimentally disproved, don't you think? As with any experiment, the team must make decisions on a variety of parameters. If anyone seriously thinks that there is something going on with the detection window, hey, all they have to do is conduct the experiment.
I was not suggesting any unfair playing by Weihs (re-reading my post I agree it looks a bit that way) :wink:. Furthermore, as I said, De Raedt has analysed the raw data from Weihs et al. and published the exact relation between the chosen time window and the results: http://rugth30.phys.rug.nl/pdf/shu5.pdf . But surely you must have read this article.
 
  • #510
DrChinese said:
1. Pot calling the kettle...

2. You apparently don't follow Mermin closely. He is as far from a local realist as it gets.

Jaynes is a more complicated affair. His Bell conclusions are far off the mark and are not accepted.
Again you've missed the point. I'm guessing that you probably didn't bother to read the papers I referenced.

DrChinese said:
I am through discussing with you at this time. You haven't done your homework on any of the relevant issues and ignore my suggestions. I will continue to point out your flawed comments whenever I think a reader might actually mistake your commentary for standard physics.
You haven't been discussing any of the points I've brought up anyway. :smile: You have a mantra that you repeat.

Here's another question for you. Is it possible that maybe the issue is a little more subtle than your current understanding of it?

If you decide you want to answer the simple questions I've asked you or address the salient points that I've presented (rather than repeating your mantra), then maybe we can have an actual discussion. But when you refuse to even look at a paper, or answer a few simple questions about what it contains, then I find that suspicious to say the least.
 
  • #511
DrChinese said:
That's bull. I am shocked you would assert this. Have you not been listening to anything about Bell? You sound like someone from 1935.
I take it then you do not understand the meaning of "correlation".

EDIT:
There are no global correlations. And on top of my prior post, I would like to mention that a Nobel likely awaits any iota of proof of your statement. Harmonic signals are correlated in some frames, but not in all.
You contradict yourself by finally admitting that in fact all harmonic signals are correlated. The fact that it is possible to screen-off correlations in some frames does not eliminate the fact that there exists a frame in which a correlation exists. In reverse, just because it is possible to find a frame in which a correlation is screened-off does not imply that the correlation does not exist.

In any case, my original point, which I believe still stands, is that two entities can be correlated even if they have never been in the same space-time region. It is trivial to understand that two systems governed by the same physical laws will be correlated whether or not they have been in the same space-time region.

I could go a step further and claim that every photon is correlated with every other photon just due to the fact that they are governed by the same physical laws, but I wouldn't as it is fodder for a different thread. ;-)
 
  • #512
... Houston, we have a problem with the FTL mechanism ...

The EPR paradox seems to be a bigger problem than one might guess at first sight. Bell's theorem has ruled out local hidden variables (LHV), both theoretically and practically by numerous Bell test experiments, all violating Bell inequalities.

To be more precise: Bell inequalities, LHV, and local realism are more or less the same thing, stating that there is an underlying preexisting reality in the microscopic QM world, and that no object can influence another object faster than the speed of light (in vacuum).

There are other theories trying to explain the EPR paradox, like the non-local hidden variables theory (NLHV). But as far as I can tell, this has lately also been ruled out experimentally by Anton Zeilinger et al.

Then we have other interpretations of QM, like the Many Worlds Interpretation (MWI), Relational Blockworld (RBW), etc. Many of these interpretations have the 'disadvantage' of introducing mechanisms that, to many, are more 'astonishing' than the EPR paradox itself, and thereby contradict Occam's razor – "entities must not be multiplied beyond necessity".

Even if it seems like "Sci Fi", (the last and) the most 'plausible' solution to the EPR paradox seems to be some 'mechanism' operating faster than the speed of light between Alice & Bob. As DrChinese expresses (my emphasis):
DrChinese said:
... Because I accept Bell, I know the world is either non-local or contextual (or both). If it is non-local, then there can be communication at a distance between Alice and Bob. When Alice is measured, she sends a message to Bob indicating the nature of the measurement, and Bob changes appropriately. Or something like that, the point is if non-local action is possible then we can build a mechanism presumably which explains entanglement results.


If we look closer at this claim, we will see that even the "FTL mechanism" creates another unsolvable paradox.

John Bell used probability theory to prove that the statistical probabilities of QM differ from those of LHV. Bell's theorem thus proves that true randomness is a fundamental part of nature:
"No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics."
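For concreteness, the QM statistics that violate the (CHSH form of the) Bell inequality are easy to check numerically. This small computation is my own illustration, using the textbook correlation E(a,b) = cos 2(a−b) for polarization-entangled photons:

```python
import math

def E_qm(a, b):
    """Textbook QM correlation for polarization-entangled photons measured
    at analyzer angles a and b (in radians): E(a, b) = cos 2(a - b)."""
    return math.cos(2.0 * (a - b))

# Standard CHSH settings: a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg
a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
S = abs(E_qm(a, b) - E_qm(a, b2) + E_qm(a2, b) + E_qm(a2, b2))
# S = 2*sqrt(2), above the local-hidden-variable bound of 2
```

No local hidden variable model can push S above 2 at these settings, which is the content of the theorem quoted above.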

Now, what happens if we solve the EPR paradox with the "FTL mechanism"? Well, as DrC says, Alice sends a message to Bob to inform him about her angle and result, and what Bob needs to change appropriately.

Does this look like a fundamental and true randomness of the QM nature?

To me it doesn’t. Even if FTL is involved, there is a cause for Alice to send a message to Bob, and Bob will have a cause for his changes!?

This doesn’t make sense. This is a contradiction to the true randomness of QM, which Bell's theorem is proving correct!?

Any thoughts, anyone?
 
  • #513
Since https://www.physicsforums.com/showpost.php?p=2729969&postcount=485.

As I've previously noted, it's not a "probability" being described as negative, it's a possible case instance E(a,b) of a probability P(A,B|a,b). To explain the difference between a "possible case instance" and a "probability", consider a coin toss. The "probability" of heads or tails is 50% each. A "possible case instance" will be either heads or tails, but no "probability" is involved after the coin toss, once we know which side the coin landed on. What is being compared is a large group of deterministically selected case instances.
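The distinction can be made concrete with a toy simulation (my own illustration): once we select the tosses that in fact landed tails, each selected case is just an instance, and no probability is left inside the selected set.

```python
import random

rng = random.Random(1)  # fixed seed, purely for reproducibility
tosses = [rng.choice("HT") for _ in range(10000)]

p_tails = tosses.count("T") / len(tosses)     # an estimate of the probability, ~0.5
tails_only = [t for t in tosses if t == "T"]  # deterministically selected case instances

# Inside tails_only nothing probabilistic remains: every entry is "T".
```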

Thus saying that case instances where the coin did in fact land on tails negatively interfere with heads is true, but makes no sense in terms of "probabilities". It's a case instance, not a probability. By itself this doesn't fully describe the negative [strike]probabilities[/strike] possibilities described on the "negative probabilities" page, because there are still too many negative possibilities to account for in coin tosses.

As is well known, in the derivation of Bell's inequalities, negative possibilities only occur in case 'instances' where detections are more likely in only one of the detectors, rather than neither or both. So far, exactly as you would expect in coin tosses. To understand where the extra coin tosses come from we need to look at a definition.
1) Bell realism - An element of reality is defined by a predictable value of a measurement.

We have 2 measuring instruments A and B, which each consist of a polarizer and a photon detector. Each measure is considered an element of reality, per Bell realism, such that a measure by each of our 2 measuring instruments constitutes 2 elements of reality. Now we are going to emit a single photon at our detectors. Only detector A has a particular polarization setting, and detector B is not another detector, but another setting we could have chosen for detector A, i.e., a counterfactual measurement.

Now, by definition we are looking for 2 elements of reality, i.e., predictable measures per Bell realism. Yet if A detects our single photon, we know B can't, and vice versa. But if counterfactually both A and B were in principle capable of separately detecting that one photon, we are allowed to presume that only sometimes did A and B both see the photon (since we can call it both ways counterfactually), and sometimes not. So if that counterfactual measure can sometimes see the same photon, we are required to call that a separate element of reality per Bell realism, even though it's the same photon. Yet that requires us to also call the times A detected the photon but B didn't 2 separate elements of reality.

If we call it the other way, and call both measurements the same element of reality per photon, it makes sense in those cases where one detector detects the photon, but not the other. But it violates Bell realism in cases where both detectors were capable of detecting that same photon. The negative possibility page presumes each measurement represents its own distinct element of reality, which makes sense in those cases where both A and B could have detected the same photon. Thus, in those cases where our single photon can't counterfactually be detected by both detectors, it appears as if reality itself has been negatively interfered with.

Objections:
But we are talking statistics of a large number of photons, not a single photon. The negative probabilities are of a large number of detections.

True, but by academic definition, the large number of cases were derived from the special cases E(a,b) of the general probability P(A,B|a,b). It's tantamount to flipping a coin many times, taking all the cases where E(a,b)=tails, and calling that a probability because we are dealing with many cases of tails, rather than just one.

This argument is contingent upon a single assumption, that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting. I justify this with the following empirical facts:
1) A polarizer will pass 50% of all 'randomly' polarized light.
2) A polarizer set at a 45 degree angle to a polarized beam of light will pass 50% of the light in that beam.
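Both facts follow from Malus's law (transmitted fraction cos²θ); here is a quick numerical check of mine, with the 'randomly polarized' case approximated by averaging over uniformly distributed polarization angles:

```python
import math

def malus(theta):
    """Malus's law: fraction of a linearly polarized beam transmitted by a
    polarizer whose axis is at angle theta (radians) to the polarization."""
    return math.cos(theta) ** 2

# Fact 2: a polarizer at 45 degrees to a polarized beam passes half of it
at_45 = malus(math.pi / 4)  # 0.5

# Fact 1: averaged over uniformly random polarization angles, half passes
n = 10000
avg = sum(malus(math.pi * k / n) for k in range(n)) / n  # -> 0.5
```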

Now this is perfectly well described in QM and HUP, and this uncertainty is a LOCAL property of the individual photon itself. In QM, polarization is also perfectly well described as a quantum bit, where it can have values between 0 and 1. It is these partial values between 0 and 1 that allows the same photon to 'sometimes' be counterfactually detected with multiple polarizer settings. Yet this bit range is still a LOCAL property of the bit/photon.

We only have to accept the reality of HUP as a real property of the LOCAL photon polarization bit to get violations of Bell realism (a distinct issue from correlations). Yet the fact that correlations exist at all, and anti-twins (anti-correlated particles) can repeat the same response to polarizers deterministically, even with offsets in the 0/1 bits, indicates that as real as HUP is, it doesn't appear to be fundamental. So in this conception we have real LOCAL bit value ranges via HUP, legitimizing the QM coincidence predictions, with correlations that indicate HUP is valid, but not fundamental. The LOCAL validity of HUP is enough to break Bell's inequalities. While the breaking of Bell realism itself, due to LOCAL HUP, breaks the negative "possibility" proof.

The one-to-one correspondence between an element of reality (photon) and a detection is broken (Bell realism) when, counterfactually, a different detector setting can sometimes detect the same photon, and sometimes not. It does not explicitly break realism wrt the reality and locality of the photon itself. The detector and counterfactual detector are, after all, effectively in the same place.
 
  • #514
ajw1 said:
I was not suggesting any unfair playing by Weihs (re-reading my post I agree it looks a bit that way) :wink:. Furthermore, as I said, De Raedt has analysed the raw data from Weihs et al. and published the exact relation between the chosen time window and the results: http://rugth30.phys.rug.nl/pdf/shu5.pdf . But surely you must have read this article.

Sure. And I consider it reasonable for them to make the argument that a change in time window causes some degradation of the results, although not enough to bring into the realistic realm. This is a good justification for their algorithm then, because theirs does not perfectly model the QM cos^2 relationship. But it does come sort of close and it does violate a Bell Inequality (as it should for their purposes) while providing a full universe which does not. Again, as a simulation, I think their ideas are OK to that point. My issue comes at a different step.
 
  • #515
DevilsAvocado said:
...Now, what happens if we solve the EPR paradox with the "FTL mechanism"? Well, as DrC says, Alice sends a message to Bob to inform him about her angle and result, and what Bob needs to change appropriately.

Does this look like a fundamental and true randomness of the QM nature?

To me it doesn’t. Even if FTL is involved, there is a cause for Alice to send a message to Bob, and Bob will have a cause for his changes!?

This doesn’t make sense. This is a contradiction to the true randomness of QM, which Bell's theorem is proving correct!?

Any thoughts, anyone?

I would tend to agree. FTL seems to fill in the cause. As I understand the Bohmian program, it is ultimately deterministic. Randomness results from stochastic elements.
 
  • #516
billschnieder said:
1. You contradict yourself by finally admitting that in fact all harmonic signals are correlated.

2. I could go a step further and claim that every photon is correlated with every other photon just due to the fact that they are governed by the same physical laws, but I wouldn't as it is fodder for a different thread. ;-)

1. I never said anything of the kind. Some synchronization is possible in some frames. Entangled particles are entangled in all frames as far as I know.

2. Maybe they are. That would be a global parameter. c certainly qualifies in that respect. Beyond that, exactly what are you proposing?
 
  • #517
my_wan said:
As is well known, in the derivation of Bell's inequalties, negative possibilities only occur in case 'instances' where detections are more likely in only one of the detectors, rather than neither or both. ...

We have 2 measuring instruments A and B, which each consist of a polarizer and a photon detector. Each measure is considered an element of reality, per Bell realism, such that a measure by each of our 2 measuring instruments constitutes 2 elements of reality. Now we are going to emit a single photon at our detectors. Only detector A has a particular polarization setting, and detector B is not another detector, but another setting we could have chosen for detector A, i.e., a counterfactual measurement.

Now, by definition we are looking for 2 elements of reality, i.e., predictable measures per Bell realism. Yet if A detects our single photon, we know B can't, and vice versa.

...


If we call it the other way, and call both measurements the same element of reality per photon, it makes sense in those cases where one detector detects the photon, but not the other. But it violates Bell realism in cases where both detectors were capable of detecting that same photon. The negative possibility page presumes each measurement represents its own distinct element of reality, which makes sense in those cases where both A and B could have detected the same photon. Thus, in those cases where our single photon can't counterfactually be detected by both detectors, it appears as if reality itself has been negatively interfered with.

Objections:
But we are talking statistics of a large number of photons, not a single photon. The negative probabilities are of a large number of detections.

True, but by academic definition, the large number of cases were derived from the special cases E(a,b) of the general probability P(A,B|a,b). It's tantamount to flipping a coin many times, taking all the cases where E(a,b)=tails, and calling that a probability because we are dealing with many cases of tails, rather than just one.

This argument is contingent upon a single assumption, that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting.

OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle. Show me a single photon - anywhere anytime - that cannot be detected by a polarizing beam splitter. Your assertion is simply incorrect! (Yes, in an ordinary PBS there is some inefficiency so 100% will not get through, but this is not what you are referring to.)

Further, the Bell program is to look for at least 3 elements of reality, not 2. The EPR program was 2.
 
  • #518
billschnieder said:
In any case, my original point, which I believe still stands, is that two entities can be correlated even if they have never been in the same space-time region. It is trivial to understand that two systems governed by the same physical laws will be correlated whether or not they have been in the same space-time region.

Oh really? Trivial, eh? You really like to box yourself in. Well cowboy, show me something like this that violates Bell inequalities. I mean, other than entangled particles that have never been in each others light cones. LOL.

You see, it is true that you can correlate some things in simple ways. For example, you could create spatially separated Alice and Bob that have H> polarization. OK. What do you have? Not much. But that really isn't what we are discussing is it? Those photons are polarization correlated in a single basis only. Not so entangled photons, which are correlated in ALL bases. So sure, we all know about Bertlmann's socks but this is not what we are discussing in this thread.
 
  • #519
DrChinese said:
OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle. Show me a single photon - anywhere anytime - that cannot be detected by a polarizing beam splitter. Your assertion is simply incorrect! (Yes, in an ordinary PBS there is some inefficiency so 100% will not get through, but this is not what you are referring to.)

Further, the Bell program is to look for at least 3 elements of reality, not 2. The EPR program was 2.
Sentence 1): "OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle."

My argument is contingent upon the assumption that single photons can (counterfactually) be detected at more than 1 angle.

Sentence 2): "Show me a single photon - anywhere anytime - that cannot be detected by a polarizing beam splitter."

Simple enough. I'll do it for a whole beam of photons. Simply polarize the beam to a particular polarization, and turn a polarizer to 90 degrees of that beam. Effectively none of the photons will get through the polarizer to a detector. Not sure why you specified a "beam splitter" here, as I'm only talking about how a photon responds to a polarizer at the end of its trip, when final detection takes place for later coincidence comparisons. But it doesn't make a lot of difference.

Just because a quantum bit has effective values between 0 and 1 doesn't entail an equal likelihood of a measurement producing a 0 or a 1 in all cases.

Sentence 3): "Your assertion is simply incorrect!"
Suspect, given that sentence 1) indicates I claimed the opposite of what I actually claimed on reasonable empirical grounds.

The parenthetical: "(Yes, in an ordinary PBS there is some inefficiency so 100% will not get through, but this is not what you are referring to.)"
True, not what I was referring to. As a matter of fact I'm quite happy to assume 100% efficiency for practical purposes, even if not strictly valid. Nor does my argument involve the PBS, only the polarizers at the distant detection points, at the time of final detection but before the coincidence counts take place. The one that's paired with the photon detector.
 
  • #520
Oops, I left out sentence 4): "Further, the Bell program is to look for at least 3 elements of reality, not 2. The EPR program was 2."

Yes, and it is this 3rd "element of reality" that I am saying is sometimes a distinct "element of reality", when the photons are unique, and sometimes not, when counterfactually the same photon would have been detected by both the 2nd and the counterfactual 3rd so-called "element of reality" (detector).

What this would mean is that the negative probability you calculated is the percentage of photons that would have been detected by both the 2nd and the counterfactual 3rd "element of reality" (detectors).
 
  • #521
I'm still a bit shocked at sentence 1):
DrChinese said:
OK, I am still calling you out on your comments about a photon not being able to be detected at more than 1 angle.

Let's enumerate sentences to the contrary in the specific post you responded to:
(Let's put the granddaddy of them first, even if out of occurrence order)

1) This argument is contingent upon a single assumption, that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting.

2) But if counterfactually both A and B were in principle capable of separately detecting that one photon, we are allowed to presume that only sometimes did A and B both see the photon (since we can call it both ways counterfactually), and sometimes not.

3) So if that counterfactual measure can sometimes see the same photon we are required to call that a separate element of reality per Bell Realism, even though it's the same photon.

4) It is these partial values between 0 and 1 that allows the same photon to 'sometimes' be counterfactually detected with multiple polarizer settings.

5) The one to one correspondence between an element of reality (photon) and a detection is broken (Bell realism), when counterfactually a different detector setting can sometimes detect the same photon, and sometimes not.

These are the sentences that explicitly state the opposite of what you claimed I said, and many more are contingent upon them.
 
  • #522
my_wan said:
I'm still a bit shocked at sentence 1):

Let's enumerate sentences to the contrary in the specific post you responded to:
(Let's put the granddaddy of them first, even if out of occurrence order)

1) This argument is contingent upon a single assumption, that a single photon can 'sometimes' be 'counterfactually' detected by the same detector with a different polarization setting.

I am so confused at what you are asserting. :bugeye:

My version does not require counterfactuality. I can do the experiment all day long. A photon can be observed at any angle at any time. I can do angle A, then B, then C, then C again, etc. And I still have a photon. So again, I am calling you out: please quote a reference which describes the behavior you mention, and point to a spot in Bell where this is referenced. Or alternately say that it is your personal speculation.
 
  • #523
DrC,
This model I am describing is new to me; it only occurred to me during this debate a few days ago. Before then the contextuality issue was purely theoretical, however reasonable it appeared to me as at least possible. Now I'm trying to express it as it's being investigated. I'm quite aware that I haven't been completely lucid in my account of it in all cases, but it seemed to me that the underlying idea should be fairly clear. Maybe that's just a perspective though.

Regardless, debating you has been far more fruitful and informative than I could possibly have hoped. It's rare for me to have the pleasure of such a worthy debate. The science certainly will not be decided by this debate, and to declare a winner or loser would not be science by any stretch of the imagination.

Beyond the rebuttals I supplied, which I found to be reasonable, and sticking to my limited proof claim, this is quickly turning into independent research for me, due to my newfound 'definable' contextuality scheme. So I'll answer questions if anyone is interested, but this debate does not exist to be won, rather for learning, and I have learned more than I could have hoped, thanks to you. If I'm not expressing myself clearly enough to get the quality rebuttals the debate started with, it's time I run with my newfound understanding and put my money where my mouth is. Thank you for such a worthy debate.

P.S. :-p
I understand that your counterfactual C can be run as a separate experiment. But when counterfactually matching it against the previously 'actual' experiment, there's a crossover in certain 'instances' where sometimes the photons from B and C should show up as common events (where B and C are calling the same events distinct), whereas in the individual experiments they were indeed distinct elements of reality. Your calculation, in my interpretation, was a statistical count of the percentage of events common to B and C, assuming C is measured on the B side for purposes of definition.

Hopefully that might help, but it's time for me to do something more real than debate it. The computer modeling sounds interesting. :biggrin: If it works the way I hope, I should be able to emulate an expansion series and express photons as large base-2 quasi-random numbers in a text file: kind of a finite way to emulate a single quantum bit, with a probability function built into the random variance of a long 0/1 binary sequence. I'll have to limit the angle settings to half-angle increments to keep the photon numbers a reasonable size. The code should be simple enough, but why it works, if it does, may still be confusing just from reading the source. But now I'm just running my mouth.
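For what it's worth, the bit-string idea described above can be sketched in a few lines. This is a hypothetical illustration of one reading of the proposal, not the poster's actual code: `make_photon`, the Malus-law density, and all parameter choices are assumptions.

```python
import math
import random

def make_photon(polarization_deg, n_bits=4096, seed=None):
    """Encode a photon as a long 0/1 sequence whose density of 1s equals the
    Malus-law detection probability cos^2(angle) relative to a reference
    analyzer at 0 degrees. The probability function lives in the random
    variance of the long binary sequence."""
    rng = random.Random(seed)
    p = math.cos(math.radians(polarization_deg)) ** 2
    return [1 if rng.random() < p else 0 for _ in range(n_bits)]

def detect(photon_bits, rng):
    """One detection event: sample a single bit from the photon's sequence."""
    return rng.choice(photon_bits)

# A 30-degree photon should yield 1s with density near cos^2(30 deg) = 0.75.
bits = make_photon(30.0, seed=1)
```

In this sketch a single quantum bit is emulated finitely: the long sequence stands in for the probability function, and any one detection reveals only one sampled bit.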

Thanks, it was not only your modeling on your website, but your forcing me to face a perspective other than my own, that gave me a new toy that might even pay off, or at least teach me something. I'm off to play.
 
  • #524
DrChinese said:
I would tend to agree. FTL seems to fill in the cause. As I understand the Bohmian program, it is ultimately deterministic. Randomness results from stochastic elements.

Yes, and if FTL brings cause to Bell test experiments, then either Bell's theorem or FTL goes in the paper bin.

And there seems to be additional dark clouds, gathering up on the "Bell sky"...
(original paper from your site)
http://www.drchinese.com/David/Bell_Compact.pdf
John S. Bell
...

VI. Conclusion
In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.

Of course, the situation is different if the quantum mechanical predictions are of limited validity. Conceivably they might apply only to experiments in which the settings of the instruments are made sufficiently in advance to allow them to reach some mutual rapport by exchange of signals with velocity less than or equal to that of light. In that connection, experiments of the type proposed by Bohm and Aharonov [6], in which the settings are changed during the flight of the particles, are crucial.

Instantaneously!? Not Lorentz invariant! ?:confused:?

Not only has QM non-locality 'problems', here goes Einstein, SR and RoS down the drain??

I have absolutely no idea what to think... we must all have missed something very crucial... because all this is too strange to be true... :eek:
 
  • #525
my_wan said:
...Regardless, debating you has been far more fruitful and informative than I could possibly have hoped. It's rare for me to have the pleasure of such a worthy debate. ... Thanks, it was not only your modeling on your website, but your forcing me to face a perspective other than my own, that give me a new toy that might even pay. At least learn from. I'm off to play.

I am glad if I was a help in any small way. The point is often to address a different perspective, and in that regard I benefit too. :smile:
 
  • #526
DevilsAvocado said:
Yes, and if FTL brings cause to Bell test experiments, then either Bell's theorem or FTL goes in the paper bin.

And there seems to be additional dark clouds, gathering up on the "Bell sky"...
(original paper from your site)

Be careful of Bell's comment, which can be EASILY misinterpreted:

"Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant."

This ONLY applies when coupled with: "In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements..." which is the REALISM requirement.

There is another time in the paper in which a similar dichotomy appears, which also can be read as indicating he is on the non-local side of things. In actuality, he personally went back and forth a bit. But his opinion is not the proper conclusion of the paper; you must stop at the local OR hidden variable point. Does that make sense?
 
  • #527
RUTA said:
You can entangle atoms that have not interacted with each other by using interaction-free measurement in an interferometer. Accordingly, these atoms don't interact with the photon in the interferometer either.
This is very interesting (reference, link?) and I'd like to learn the setup, but what does this have to do with what I said wrt the FandC and Aspect experiments?

Please recall my statement that entanglement has to do with RELATIONSHIPS between and among the motional properties of entangled entities that result from these entities' interaction with each other or with a common disturbance, or having a common origin, or being part of an encompassing system. These RELATIONSHIPS, when subjected to physical analysis via global measurement parameters, are revealed in the form of correlations predicted by the QM formalism.

My contention is, of course, that any such entanglement RELATIONSHIPS, and observations of them, are compatible with the assumption of locality.

I don't consider it strange or weird at all that entities that have never interacted with each other can be related in a way or ways such that entanglement stats result when these relationships are observed in certain contexts. The orderliness of our universe suggests that a fundamental wave dynamic(s) underlies all of our universe's emergent complexity, and pervades every epistemic and ontic scale. The essence of entanglement is that everything is related to this fundamental dynamic(s).

I've never considered nonlocality or even ftl locality to be serious contenders in the effort to understand and explain EPR-Bell related conundra. For me it's always been about getting at the physical meaning of 'quantum nonseparability', which has to do with the nonseparability of the relationships between the entangled entities wrt the measurement parameters which reveal those relationships in the form of entanglement stats. So, as long as those relationships can be maintained, or insofar as they can be produced, in entities at great separations, then the revealing of the entanglement correlations via the joint analysis of the relationship(s) between those entities can be understood as evolving via local channels no matter how far apart the joint measurements are done.
 
  • #528
DrChinese said:
You apparently don't follow Mermin closely. He is as far from a local realist as it gets.

Jaynes is a more complicated affair. His Bell conclusions are far off the mark and are not accepted.
I want to say something else about this reply of yours.

It's in response to a rather long post where I laid out what I've been trying to say a bit more clearly, I think. Yet, instead of replying to the substance of the post, to the actual arguments regarding the nonviability of lhv theories per EPR-Bell, you chose this peripheral issue to reply to. Very curious.

I'm led to suppose that you're intimidated by the argument that I'm presenting. I'm supposing this because (1) you have yet to directly address it, and (2) in a post of yours you said that I was 'claiming victory' (though I've done no such thing).

So, what is it about the argument that you find so difficult? It can't be that you think that I'm advocating the possibility of lhv theories, because, as I've repeatedly stated, I'm not. In fact, the argument is telling you exactly why lhv theories are impossible. Of course if the argument is correct, then there's no basis for inferring nonlocality.

On the other hand, if the argument isn't difficult or subtle, and if it's obviously incorrect, then why not just refute it outright and maybe I can learn something (you know, point out the error in my thinking). Isn't that what a science advisor is supposed to do?

But instead you said this:

DrChinese said:
I am through discussing with you at this time. You haven't done your homework on any of the relevant issues and ignore my suggestions. I will continue to point out your flawed comments whenever I think a reader might actually mistake your commentary for standard physics.
This isn't about standard physics. I'm pretty sure we agree on the standard physics. We're not arguing about qm or even Bell's theorem, per se. This is an interpretational issue. The interpretation of the physical meaning of violations of BIs that's been presented happens to be based on standard physics. It simply points out a reason for the incompatibility between lhv formulations (as restricted by Bell and EPR elements of reality) that's been noticed by relatively few commentators on the subject. It makes nonlocality unnecessary.

Nonlocality is, anyway, neither standard nor nonstandard physics. It isn't physics at all. It's just a word for ignorance of precisely why lhv theories are impossible and why BIs are violated. An interpretation and explanation for this has been presented which doesn't involve invoking nonlocality. So far it's gone unaddressed. Is it possible that what it entails (that the assumption of locality is compatible with the impossibility of lhv theories vis EPR, Bell, GHZ, etc.) isn't nonstandard enough?

Is it possible that Bell is right and nonlocality is wrong? Of course it is, and that's all that I'm saying.

DrChinese said:
In a local hidden variable model, each observer is measuring a separate reality. So there is no JOINT observable (or context).
That's right. (You're almost there.) But entanglement IS a JOINT observational context. (Let that sink in for a moment.)

Now, is what's being measured in the separate measurements at A and B the same as what's being measured jointly?
The answer is no. That's why I said:

ThomasT said:
It should become clear that the variables which determine individual detection rates can't be made to (can't be put into a form which would) account for the joint detection rates, because they aren't the determining factors in that situation.

DrChinese said:
... in a local world, what happens here does not affect what happens there.
That's right. But there are only two values for |a - b| where A and B are perfectly correlated (anticorrelated), and these perfect correlations are compatible with the assumption that the relationship between the entangled photons has a local common cause.

But, you might counter, the full range of entanglement stats can't be reproduced by an lhv description of the joint context. And that's correct, but it's because what's being measured in the separate measurements at A and B is not the same as what's being measured jointly.

DrChinese said:
If there is a "joint detection parameter" observable, it is global. That does not work in a local world either.
It works in a local world. It just doesn't work in a local hidden variable theory per EPR-Bell. (1) The joint measurement parameter is |a - b|. (2) What |a - b| is measuring is the relationship between the counter-propagating disturbances. Both (1) and (2) are compatible with the assumption of c-limited locality. However, the relationship between the counter-propagating disturbances doesn't determine individual results.
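For concreteness, the role of the joint parameter |a - b| appears directly in the textbook QM prediction for a polarization-entangled pair (standard physics, not anything specific to the model being debated): the probability of identical outcomes depends only on the angle difference, and correlation is perfect at exactly two values, 0° and 90°.

```python
import math

def p_same(a_deg, b_deg):
    """QM probability that both photons of a polarization-entangled pair
    (|HH> + |VV> type state) give the same result at linear analyzer
    angles a and b. Depends only on the difference |a - b|."""
    return math.cos(math.radians(a_deg - b_deg)) ** 2

# Perfect correlation at |a - b| = 0; perfect anticorrelation at |a - b| = 90.
assert abs(p_same(37.0, 37.0) - 1.0) < 1e-12
assert p_same(0.0, 90.0) < 1e-12
```

Shifting both angles by the same amount leaves the prediction unchanged, which is exactly the sense in which |a - b| is the joint measurement parameter.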

DrChinese said:
So you may be correct ...
It is correct. But the presentation needs some refining.

DrChinese said:
... but you are not describing a local realistic model.
Hopefully it will become clear that I'm not trying to do that, but rather explain why such a model is impossible, and why the impossibility of constructing such a model doesn't imply nonlocality (or ftl info transfer).

To revisit the Unnikrishnan paper that you didn't want to look at, the purpose of presenting it was to illustrate the point(s) that I've been presenting, not to advocate it as an lhv theory candidate. If you look at it you'll see that it isn't an lhv model in the sense of EPR-Bell. The author even says as much. So it can be taken as further, indirect, evidence that lhv theories per EPR-Bell are impossible. But it is explicitly local. Hence the conclusion: Bell is correct AND nonlocality is obviated.

So, wrt this statement:
DrChinese said:
There can be no entanglement - in a local realistic world ...
I think that a better way to put it is that there can be no local realistic (per EPR-Bell) theories of entanglement in a local realistic world.
 
  • #529
glengarry said:
Not only is it "scientific" to posit that Nature is fundamentally nonlocal, it is also the only "logical" thing to do.
Regarding your lengthy, interesting, and well written post, I agree that nonlocality is a matter of convenience.

glengarry said:
And other than that, I am doing my best to continue the tradition of pushing towards a thoroughly believable ontological theory of physical reality here at physicsforums.
And a fine tradition it is. However, my immediate aim, although it might be compatible with this tradition, is simply to understand why lhv theories per Bell-EPR are impossible in a universe which seems to be evolving in accord with the principle of locality. And it turns out, it seems, that this is rather simply explained.
 
  • #530
ThomasT said:
1. I want to say something else about this reply of yours.

It's in response to a rather long post where I laid out what I've been trying to say a bit more clearly, I think. Yet, instead of replying to the substance of the post, to the actual arguments regarding the nonviability of lhv theories per EPR-Bell, you chose this peripheral issue to reply to. Very curious.

I'm led to suppose that you're intimidated by the argument that I'm presenting.

2. To revisit the Unnikrishnan paper that you didn't want to look at, the purpose of presenting it was to illustrate the point(s) that I've been presenting, not to advocate it as an lhv theory candidate. If you look at it you'll see that it isn't an lhv model in the sense of EPR-Bell. The author even says as much. So it can be taken as further, indirect, evidence that lhv theories per EPR-Bell are impossible. But it is explicitly local. Hence the conclusion: Bell is correct AND nonlocality is obviated.

So, wrt this statement:
I think that a better way to put it is that there can be no local realistic (per EPR-Bell) theories of entanglement in a local realistic world.

1. :-p :biggrin: You really are making me laff...

2. This paper does not offer a local realistic model. And your thoughts on non-locality are simply an opinion, much like any interpretation would be considered.
 
  • #531
DrChinese said:
Be careful of Bell's comment, which can be EASILY misinterpreted:

"Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant."

This ONLY applies when coupled with: "In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements..." which is the REALISM requirement.

There is another time in the paper in which a similar dichotomy appears, which also can be read as indicating he is on the non-local side of things. In actuality, he personally went back and forth a bit. But his opinion is not the proper conclusion of the paper, you must stop at the local OR hidden variable point. Does that make sense?

To be honest – I’m wandering around in a "personal intellectual mud", up to my knees right now.

We’ve eliminated LHV, NLHV, FTL, Loopholes, Malus, etc, and stated that QM is correct.

What’s left!? How does QM solve this unsolvable problem?? I’m going crazy over here... :cry:

I will not believe in MWI before we get a "Hello world!" from a parallel universe (which could take 'awhile')...

What’s your solution??
 
  • #532
DrChinese said:
1. :-p :biggrin: You really are making me laff...
You're making my case for me. You still haven't addressed the argument(s).

DrChinese said:
2. This paper does not offer a local realistic model.
No kidding. Maybe you should read the paper, or what I said about it.

ThomasT said:
To revisit the Unnikrishnan paper that you didn't want to look at, the purpose of presenting it was to illustrate the point(s) that I've been presenting, not to advocate it as an lhv theory candidate. If you look at it you'll see that it isn't an lhv model in the sense of EPR-Bell. The author even says as much. So it can be taken as further, indirect, evidence that lhv theories per EPR-Bell are impossible. But it is explicitly local. Hence the conclusion: Bell is correct AND nonlocality is obviated.

DrChinese said:
And your thoughts on non-locality are simply an opinion, much like any interpretation would be considered.
Again, duh. What do you think your thoughts on nonlocality are?

Of course, if you won't even address the reasons behind the opinion ...
 
  • #533
DevilsAvocado said:
What’s your solution??

Easy, drink wine and listen to the Beatles.
 
  • #534
DrChinese said:
Easy, drink wine and listen to the Beatles.

HAHAHA! :smile: Non-local wine, right? And the Beatles from the surrealistic period, right? :smile::smile:
 
  • #535
DevilsAvocado said:
HAHAHA! :smile: Non-local wine, right? And the Beatles from the surrealistic period, right? :smile::smile:

I'm talking sitars!
 
  • #536
DrChinese said:
I'm talking sitars!

Yeah, I hear you!

[attached image: ruqhjr.jpg]


LMPO :biggrin:
 
  • #537
DrChinese said:
I am glad if I was a help in any small way. The point is often to address a different perspective, and in that regard I benefit too. :smile:
The help wasn't so small, it was instrumental.
 
  • #538
I would like to pick up this part of the discussion, as you made a similar comment in another thread:
DrChinese said:
Don't you think the authors would be raising flags if the stats deviated from QM predictions by a significant amount?
I actually wrote Xian-Min Jin asking this question:
"The question is about calibration data of entangled photon source. And exactly this sentence: "The visibilities for the polarization correlations are about 98.1% for |H>/|V> basis and 92.6% for |+45°>/|−45°> basis, without the help of narrow bandwidth interference filters."
Two visibilities seem quite different. So could you please tell me what is the possible reason for this difference in two visibilities?"

And the answer he gave was:
"About the entanglement source, we employ type II SPDC phase match
to generate biphoton. The obtained two photons are either H1V2 or V1H2 with equal probability. So normally we can get very high visibility when we measure H/V basis. If we want make the two photons be entangled, we need to make the two possible events overlap very well at both spatial and temporal modes so that we can not distinguish them any more without measuring their polarization basis. Experimentally, we can not get so ideal condition, that means H1V2 and V1H2 are partially distinguishable. As a result, the entanglement visibility is limitted, this induce that we can not observe perfect correlation at +/- basis. In my experiment, actually, the visibility is considerablely high comparing with previous work, and sufficient for observation of photonics de Broglie wave."

So my answer regarding your comment is that the QM prediction, on a more detailed level, includes the product state and the entangled state as two extremes for the setup of the entangled source.
The QM prediction à la Bell is just that theoretically you can reach the entangled extreme in the case of efficient detection. And if you do not perform dedicated research, you can never find out whether detection efficiency is one of the factors that influence the quality of entanglement or not.
 
  • #539
zonde said:
1. I actually wrote Xian-Min Jin asking this question:
"The question is about calibration data of entangled photon source. And exactly this sentence: "The visibilities for the polarization correlations are about 98.1% for |H>/|V> basis and 92.6% for |+45°>/|−45°> basis, without the help of narrow bandwidth interference filters."
Two visibilities seem quite different. So could you please tell me what is the possible reason for this difference in two visibilities?"

And the answer he gave was:
"About the entanglement source, we employ type II SPDC phase match
to generate biphoton. The obtained two photons are either H1V2 or V1H2 with equal probability. So normally we can get very high visibility when we measure H/V basis. If we want make the two photons be entangled, we need to make the two possible events overlap very well at both spatial and temporal modes so that we can not distinguish them any more without measuring their polarization basis. Experimentally, we can not get so ideal condition, that means H1V2 and V1H2 are partially distinguishable. As a result, the entanglement visibility is limitted, this induce that we can not observe perfect correlation at +/- basis. In my experiment, actually, the visibility is considerablely high comparing with previous work, and sufficient for observation of photonics de Broglie wave."

2. So my answer regarding your comment is that QM prediction on more detailed level includes product state and entangled state as two extremes for the setup of entangled source.

QM prediction a la Bell is just that theoretically you can reach this entangled extreme for the case of efficient detection. And if you do not perform dedicated research you can never find out whether detection efficiency is one of the factors that influences quality of entanglement or not.

1. Great stuff, thank you for sharing his comments. I love hearing more details about these experiments.

2. The quality of entanglement can be measured by how close you come to perfect correlations when setting up the experiment. So you might expect that there is always a mix of |ES> + |PS> statistics (Entangled and Product). Ideally, ES is 100%. But clearly, that ideal is not met in this experiment and the result will be a deviation from the QM predicted rates accordingly. But not enough to cross back into the Local Realistic side of the Bell Inequality.

So are you saying that the detectors somehow influence this? I don't follow that point or what you think the implications would be. It is the setup that determines things, of which the detectors are an element. But their efficiency shouldn't matter to that setup.
 
  • #540
After reading about and debating this issue I began to question the difference in perspective that makes some so willing to question the interpretation of Bell's violations, while others see it as unavoidable. I'm not talking about those who simply refuse on the grounds it's too non-physical. It seems to me to involve different ways of thinking about what constitutes an element of reality. I think of emitted photons as conserved numbers of things, independent of what measurements seem to imply. The converse is to think in terms of the measurements as what's physically real, and assume properties back from that. I can think of countless measurable quantities which depend on elements of reality, but do not represent countable elements of reality.

Consider temperature, easily measurable. We know it's a measure of the average kinetic energy of molecular motion, but temperature alone tells us nothing about the number of particles involved. With the mole unit we know its mass, but the notion of a single basic unit of existential mass is speculative. Temperature doesn't even tell us the state of the matter at that temperature: some is liquid, some solid, some gas, and some plasma. The notion of Bell realism seems a stretch, especially once QM is brought into the picture. I also understand it was used in EPR, and why. In some abstract sense we can make Planck's constant, quantum events, the fundamental unit. But as we'll see below, this is not allowed under Bell's theorem.

I began reading this and it made some curious points:
Nonlocality, Bell's Ansatz and Probability - http://arxiv.org/abs/quant-ph/0602080

In section III it says this:
http://arxiv.org/abs/quant-ph/0602080 said:
BELL'S intention when conceiving of his "proof" excluded insinuating, at the meta-level where the inequalities are being derived, any hypothesis not found in classical, local and realistic physics as it was understood before the discovery of QM, where the interpretation issues of QM do not exist. His explicit purpose was to examine the question of the existence of a covering theory that has just the structure exploited by classical, pre-quantum theories.
In fact my argument that EPR could be a local phenomenon explicitly depends on the empirical reality of distinctly quantum effects, albeit quantum effects that occur distinctly LOCALLY at the particle detection points. Yet, according to this, that doesn't pass muster as "local realism" for purposes of Bell's inequality. The notion that QM, which no reasonable person could empirically deny, is disallowed from consideration as a LOCAL mechanism for explaining violations of Bell's inequalities is empirically beyond the pale. It's tantamount to requiring a 'complete' classical theory of the entirety of QM to refute the non-local+non-realistic proof by Bell, irrespective of whether the effects can be fully described by purely local 'quantum' effects. Entirely unreasonable, and empirically unjustifiable.

Here's another point that was fundamental to my argument:
http://arxiv.org/abs/quant-ph/0602080 said:
Of course, what is not known in this case is the precise polarization of the signal comprising the pair as emitted at the source but before they reach the polarizer-filters. The polarizer settings can be known because they are inputs into measuring devices under the control of the experimenter who selects their orientation before the pair is generated at the source. Seen this way, it is absolutely clear that such detector settings have no effect on the source, and, therefore, have no effect on the pair of signals before they enter the polarizers.
Some may dispute the no-effect claim made here, but again, the unknown polarization at the time of emission was a core feature of my argument, though I did take the consequences much farther.

I do not wish to further defend my poorly explored interpretation at this point, nor use it as an instrument to portray potentially false impressions of myself or others. But there are a few things to be learned from this debate that you can take to the bank. The mathematical legitimacy of Bell's theorem is irrefutable. The fact of this legitimacy does not translate to any fact of legitimacy about any given interpretation of what it means. The issues involved are open research, and nobody has all the answers, nor fully appreciates all nuances of alternative viewpoints and issues. If it were really that easy it wouldn't be an open area of research, and that's part of what makes it exciting and curious. Strong arguments for what may or may not ultimately be right can be made on both sides of the fence.
 
  • #541
my_wan said:
... there's a few things to be learned from this debate, you can take to the bank. The mathematical legitimacy of Bell's theorem is irrefutable. The fact of this legitimacy does not translate to any fact of legitimacy about any given interpretation of what it means...

The mathematical legitimacy of Bell's theorem is irrefutable?

Does Bell use P(AB|H) = P(A|H).P(B|H)?

Is P(AB|H) = P(A|H).P(B|H) valid when A and B are correlated?

Are A and B correlated in EPR settings?
 
  • #542
JenniT said:
The mathematical legitimacy of Bell's theorem is irrefutable?

Does Bell use P(AB|H) = P(A|H).P(B|H)?

Is P(AB|H) = P(A|H).P(B|H) valid when A and B are correlated?

Are A and B correlated in EPR settings?

That depends on how you define H, the nature of the hidden variable that is presumed to be involved in determining the correlation effects between A and B. I would certainly say H is overly restrictive, even in a 'realistic' sense, but others disagree.

The physical validity doesn't have to be legitimate for the mathematical validity to hold, and models which are limited to H, as it is defined here, are indeed invalid. But I was satisfied of that on von Neumann's argument alone. That's only the simplest unabashedly classical approach anyway. There are plenty of issues with pre-quantum classical physics, from many areas not just restricted to QM, to justify modifications, even if it still manages to remain essentially classical in character from some perspective. Even Newton had his critics over the 'magical' elements of classical theory, and background dependence almost certainly has to go.
 
  • #543
DrChinese said:
2. The quality of entanglement can be measured by how close you come to perfect correlations when setting up the experiment. So you might expect that there is always a mix of ES> + PS> statistics (Entangled and Product). Ideally, ES is 100%. But clearly, that ideal is not met in this experiment and the result will be a deviation from the QM predicted rates accordingly. But not enough to cross back into the Local Realistic side of the Bell Inequality.
It is not exactly a deviation from QM. You see, QM covers this |PS> state too. So you don't need to resort to some other idea (LHV or anything) in any case.

I have posted this formula a couple of times, but maybe it will make more sense now in conjunction with a real experimental setup.
P_{VV}(\alpha,\beta) = \underset{product\; terms}{\underline{sin^{2}\alpha\, sin^{2}\beta + cos^{2}\alpha\, cos^{2}\beta}} + \underset{interference\; term}{\underline{\frac{1}{4}sin 2\alpha\, sin 2\beta\, \mathbf{cos\phi}}}
This is a slightly reduced (without the \theta_{l} factor) equation (9) from the paper http://arxiv.org/abs/quant-ph/0205171 that describes a type-I PDC source.
Type-II PDC can be described in the same way. I found this out from Kwiat et al, "New High-Intensity Source of Polarization-Entangled Photon Pairs" (I won't post the link, to be on the safe side with forum rules about copyrights). There equation (1) is:
|\psi\rangle=(|H_{1},V_{2}\rangle+e^{i\alpha}|V_{1},H_{2}\rangle)/\sqrt{2}
which is basically the same equation but in a more QM-style format.

As you can see from the first formula, cos\phi acts as a coefficient in the range from -1 to 1, and accordingly the interference term can change its weight between maximally negative, none at all, and maximally positive. QM does not place any restrictions on that.
So if the interference term becomes zero and the photon state reduces to a completely local realistic product state, it is still covered by this QM description.
The physical interpretation in QM of this cos\phi coefficient is that it characterizes transverse and longitudinal (temporal) walkoffs.

As an experimenter you have the goal of getting this cos\phi maximally close to either 1 or -1, and if you do not succeed for some reason, the interpretation says you have not compensated those walkoffs to a satisfactory level.
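As a rough numeric check, the reduced formula quoted above can be evaluated exactly as transcribed (the overall normalization may differ from the cited paper's equation (9); this only illustrates how cos φ interpolates between product-state and maximally interfering statistics):

```python
import math

def p_vv(alpha_deg, beta_deg, cos_phi):
    """Coincidence rate per the reduced type-I PDC formula quoted above:
    product terms plus an interference term weighted by cos(phi)."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    product = math.sin(a) ** 2 * math.sin(b) ** 2 + math.cos(a) ** 2 * math.cos(b) ** 2
    interference = 0.25 * math.sin(2 * a) * math.sin(2 * b) * cos_phi
    return product + interference

# At alpha = beta = 45 deg the interference term carries all the weight:
# cos_phi = 0 leaves the pure product-state value 0.5; cos_phi = +/-1 shifts it.
for cos_phi, expected in [(0.0, 0.50), (1.0, 0.75), (-1.0, 0.25)]:
    assert abs(p_vv(45.0, 45.0, cos_phi) - expected) < 1e-12
```

Setting cos_phi = 0 makes the rate factorize into the product terms alone, which is the "completely local realistic product state" limit described in the post.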

DrChinese said:
So are you saying that the detectors somehow influence this? I don't follow that point or what you think the implications would be. It is the setup that determines things, of which the detectors are an element. But their efficiency shouldn't matter to that setup.
It's hard for me to say something about your comment that efficiency shouldn't be a factor. That's because for some time now it has not been a question of "if" for me, but rather of "how". And to be precise, it's not only the efficiency of the detectors but rather the coincidence detection efficiency of the setup as a whole.

But more to the point, I interpret this interference term as a correlation in the samples of detected photons, meaning that the samples are uneven. If this unevenness is similar we have a positive interference term, if the similarity is inverted we have a negative interference term, and if the unevenness runs in independent "directions" we have no interference term. Obviously, with efficient detection no "direction" in the unevenness of the sample is detectable any more.

This loss of information under efficient detection can be illustrated with an example. Say we have a box with different objects in it. The box has a hole, and if we shake it, some objects fall out. Afterwards we can look at the objects outside the box and at those left inside, so we can estimate the probability that a particular object falls out rather than stays inside. But if we always shake the box until all the objects have fallen out (efficient detection), we lose all information about that falling-out probability.
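The box analogy can be simulated with a toy model (entirely hypothetical, not data from any experiment): give each object a hidden per-shake probability of falling out, then compare the sample collected at low efficiency with the sample collected by shaking until the box is empty:

```python
import random

# Toy version of the box analogy: each object carries a hidden
# per-shake probability of falling out of the box.
random.seed(1)
hidden = [random.uniform(0, 1) for _ in range(50_000)]

def detected_sample(extra_efficiency):
    # an object is counted if it falls out; extra_efficiency -> 1 means we keep
    # shaking until everything is out (perfectly efficient detection)
    return [h for h in hidden
            if random.random() < h + (1 - h) * extra_efficiency]

low  = detected_sample(0.0)  # inefficient: biased toward easy-to-detect objects
full = detected_sample(1.0)  # efficient: every object detected, the bias is invisible

print(len(full) == len(hidden))  # perfect detection recovers everything...
print(sum(low) / len(low) - sum(hidden) / len(hidden))  # ...but the low-efficiency sample over-represents large hidden values
```

The inefficient sample carries information about the hidden fall-out probabilities (its mean is visibly shifted), while the fully efficient sample is just the whole population and that information is gone.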
 
  • #544
DevilsAvocado said:
What’s left!? How does QM solve this unsolvable problem?? I’m going crazy over here... :cry:

Maybe I missed something. What is the problem? Nature is not local; I am OK with that.
 
  • #545
JenniT said:
The mathematical legitimacy of Bell's theorem is irrefutable?

Does Bell use P(AB|H) = P(A|H).P(B|H)?

Is P(AB|H) = P(A|H).P(B|H) valid when A and B are correlated?

You do NOT need the probability formula to get the Bell result. Despite some of the posts you may have seen, you can get it in a variety of ways. For example, if you accept that:

0<=P(A|H)<=1
0<=P(B|H)<=1
0<=P(C|H)<=1

Then you can derive the formula too. A, B and C can be correlated in any way you like, because then you have:

0<=P(AB|H)<=1
0<=P(AC|H)<=1
0<=P(BC|H)<=1

and then:

0<=P(ABC|H)<=1

But as I have shown previously, this value is less than -.1 (i.e. less than -10%) for some ABC combinations if the QM predictions are substituted. Obviously, a negative value for P(ABC|H) contradicts the above.
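The arithmetic behind that negative value can be checked directly. A common illustrative choice of settings (mine, not stated in the post) is three polarizer angles 120° apart, with the QM prediction that paired outcomes match with probability cos² of the relative angle:

```python
import math

def match_prob(relative_angle_deg):
    # QM prediction for polarization-entangled photon pairs: the two outcomes
    # agree with probability cos^2 of the relative polarizer angle
    return math.cos(math.radians(relative_angle_deg)) ** 2

# Illustrative settings A = 0°, B = 120°, C = 240°: every pair is 120° apart
p_ab = match_prob(120)
p_ac = match_prob(240)
p_bc = match_prob(120)

# For predetermined +/- outcomes at all three settings, counting the eight
# possible cases gives P(A=B) + P(A=C) + P(B=C) = 1 + 2*[p(+++) + p(---)],
# so the probability of the "all three outcomes equal" cases would be:
p_all_same = (p_ab + p_ac + p_bc - 1) / 2
print(p_all_same)  # about -0.125, i.e. below -10%: impossible for a genuine probability
```

For these settings the "probability" comes out to -0.125, consistent with the "less than -.1" figure above, which is the contradiction with 0<=P(ABC|H)<=1.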
 
  • #546
Dmitry67 said:
Maybe I missed something. What is the problem? Nature is not local; I am OK with that.

Hi Dmitry67, the 'problem' is that you believe in MWI, which I don’t, unless you show me a "Hello world!" from one of those >centillion^1000 parallel universes! :wink:
 
  • #547
JenniT said:
The mathematical legitimacy of Bell's theorem is irrefutable?
Wrt the inequalities it is.

JenniT said:
Does Bell use P(AB|H) = P(A|H).P(B|H)?
Yes.

JenniT said:
Is P(AB|H) = P(A|H).P(B|H) valid when A and B are correlated?
No.

JenniT said:
Are A and B correlated in EPR settings?
Yes.
 
  • #548
I wonder if this will make the counterfactual assumptions clearer?

You have:
0<=P(A|H)<=1
0<=P(B|H)<=1
0<=P(C|H)<=1

From which this is derived:
0<=P(AB|H)<=1
0<=P(AC|H)<=1
0<=P(BC|H)<=1

But the P(BC|H) case was never measured in tandem; rather it is constructed from the actual measurements P(AB|H) and P(AC|H) (and even a measured P(BC|H) for good measure), because the correlations come in pairs. Suppose P(AB|H) and P(AC|H) were each constructed from a dataset of 1500 correlation pairs: 3000 photon-count "elements of reality" per detector, 6000 in total. Now when you combine B and C, you are adding 1500 pairs of "elements of reality" (3000 total) that never actually existed simultaneously but presumably could have. By counterfactually assuming they occurred simultaneously in the same dataset, if C can detect the same "element of reality" (photon) in some, but not all, cases, it becomes impossible to get the "elements of reality" as defined by the measurements to equal the "elements of reality" as defined by the number of photons.

This is a valid concern if, and only if, C sometimes selects photons that would also have been selected by B, and vice versa, such that if both emitted photons AND detections are labelled "elements of reality", the counts of the two cannot possibly match. In the combined counterfactual case, B and C can effectively be viewed as the same detector with two different detection settings at once.

If Malus' law is valid for defining the odds of a single photon being detectable under two different detector settings, such that a photon with a specific polarization has a 50% chance of passing a polarizer set 45 degrees from its 'actual' polarization, then the derivation of the mismatch between these two ways of counting the "elements of reality" almost exactly matches the negative-probabilities derivation.

The only difference is that you take the counterfactual "element of reality" set BC, which is two settings of the same detector counting the same set of photons, and subtract the total of the AB and AC sets; then you have the percentage of detector-event-defined "elements of reality" minus the photon-count-defined "elements of reality": Z - (X + Y). Divide by 2 to get a per-detector percentage for B and C.

I'm not trying to argue this at the moment, but it would be good to make the case clear enough to get some effective criticism.
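One way to probe the Malus-law counting above is a toy local model (my sketch, not the poster's code): each pair shares a random polarization λ, and each side passes its polarizer independently with probability cos²(setting − λ). Averaged over λ, such a model gives a match rate of 1/2 + (1/4)cos 2Δ rather than QM's cos²Δ = 1/2 + (1/2)cos 2Δ, which is one concrete form of the counting mismatch:

```python
import math
import random

random.seed(0)

def local_match_rate(delta_deg, trials=200_000):
    """Toy local hidden-variable model: both photons share a random polarization
    lam; each side passes its polarizer independently with the Malus
    probability cos^2(setting - lam)."""
    delta = math.radians(delta_deg)
    matches = 0
    for _ in range(trials):
        lam = random.uniform(0, math.pi)
        a = random.random() < math.cos(0.0 - lam) ** 2
        b = random.random() < math.cos(delta - lam) ** 2
        matches += (a == b)
    return matches / trials

# QM predicts cos^2(22.5°) ≈ 0.854 for a 22.5° relative angle; this local model
# averages to 1/2 + cos(45°)/4 ≈ 0.677 instead
print(local_match_rate(22.5))
```

The model produces genuine correlations, but its interference visibility is capped at 1/4 instead of the QM value of 1/2, so the simulated counts cannot reach the quantum predictions.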
 
  • #549
DevilsAvocado said:
Hi Dmitry67, the 'problem' is that you believe in MWI, which I don’t, unless you show me a "Hello world!" from one of those >centillion^1000 parallel universes! :wink:

What about BM?
The wavefunction is non-local in any case.
So MWI or not, nonlocality is inevitable.

I see causality as an emergent property of the macroscopic world. In that case a-causality is more fundamental, and we are just lucky that our world is causal in the IR (macroscopic) limit.

It is curious that the opposite way of thinking is common: "wow, how can nature be non-local? I can't believe it!" For me the deeper mystery is why it is causal.
 
  • #550
my_wan said:
If Malus' law is valid for defining the odds of a single photon being detectable under two different detector settings, such that a photon with a specific polarization has a 50% chance of passing a polarizer set 45 degrees from its 'actual' polarization, then the derivation of the mismatch between these two ways of counting the "elements of reality" almost exactly matches the negative-probabilities derivation.

The only difference is that you take the counterfactual "element of reality" set BC, which is two settings of the same detector counting the same set of photons, and subtract the total of the AB and AC sets; then you have the percentage of detector-event-defined "elements of reality" minus the photon-count-defined "elements of reality": Z - (X + Y). Divide by 2 to get a per-detector percentage for B and C.

You cannot get "close" to the negative-probability derivation as long as you cling to the idea that:

0<=P(A|H)<=1
0<=P(B|H)<=1
0<=P(C|H)<=1
and Malus.
 
