Is action at a distance possible as envisaged by the EPR Paradox?

In summary, John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #246
DrChinese said:
In Relational BlockWorld, I like to say that the hidden variables lie in the future. RUTA will probably choke on that description. :smile: If you accept that, then you would probably end up concluding that the future influences the past and causality is lost. RUTA would probably be OK with that, because he considers RBW to be acausal. :smile:

If you want to characterize future experimental outcomes as the "hidden variables" needed to understand EPR-Bell phenomena per RBW, then what you said is absolutely true.
 
  • #247
RUTA said:
If you want to characterize future experimental outcomes as the "hidden variables" needed to understand EPR-Bell phenomena per RBW, then what you said is absolutely true.

That is what I meant, but you said it better. It is almost as if there are many little hands reaching out from the past, waiting for a handshake before their impact is finalized. Gee, now I am starting to sound like Yoda Jedi.
 
  • #248
DrChinese said:
That is what I meant, but you said it better. It is almost as if there are many little hands reaching out from the past, waiting for a handshake before their impact is finalized. Gee, now I am starting to sound like Yoda Jedi.

Please don't do that, I can only manage one throbbing headache at a time. ;)
 
  • #249
DrChinese said:
That is what I meant, but you said it better. It is almost as if there are many little hands reaching out from the past, waiting for a handshake before their impact is finalized. Gee, now I am starting to sound like Yoda Jedi.

That's Cramer's Transactional Interpretation. In TI, there is literally a wave "coming from" the future experimental outcomes to interact with the wave leaving the source at the beginning of the experiment. That's how they get the Born rule. Of course, if there are waves coming from the future to influence the present, then the future is already "there." And, since the present is the future of the past, the past must also be "there." That's the blockworld wherein "nothing happens." Here's a nice quote from Geroch (General Relativity from A to B, University of Chicago Press, Chicago, 1978, pp. 20-21):

There is no dynamics within space-time itself: nothing ever moves therein; nothing happens; nothing changes. In particular, one does not think of particles as moving through space-time, or as following along their world-lines. Rather, particles are just in space-time, once and for all, and the world-line represents, all at once, the complete life history of the particle.

So, why bother trying to tell "stories" about "the future influencing the past?"
 
  • #250
IcedEcliptic said:
Why do you believe that?
I, and others, think the non-viability of lhv representations is due to a problem with the formal requirements not fitting the experimental situations, and, if that's so, then ftl 'explanations' for BI violations (and the correlations) are obviated.
 
Last edited:
  • #251
DevilsAvocado said:
ThomasT, I must say that it’s not only "Spukhafte Fernwirkung" that’s a mystery to me – your 'interpretation' of EPR & Bell test experiments is a 'mystery' as well (no offence).
None taken. :smile: Spooky action at a distance is just a collection of terms that has no physical meaning. I'm pretty sure that EPR meant it facetiously. If what I'm saying about EPR and Bell is a mystery to you, then all I can tell you is to keep studying and thinking about this stuff and what I'm saying about it will eventually make sense to you -- even though you might still disagree with what I'm saying about it. In any case, I think we're both fascinated by the mysteries of the quantum realm, and that's a key ingredient in motivating one to learn more about this stuff.

DevilsAvocado said:
First: When talking about angle (and settings), I think that most interested folks here understand that it’s the angles of the analyzers we are talking about, and not LHV. When talking about 'pre-agreement' and LHV, it’s the 'presetting' of the particle spin (of the pair) that’s addressed, which can be spin up(+) or spin down(-).
Where do you think it's most logical to assume that the relationship between the counter-propagating photons is created -- (1) during the emission processes which are so carefully and subtly prepared by the experimenters with the intention of doing just that, or (2) sometime after emission due to ftl communication between the photons?

Part of what I'm saying is that choosing (1) is ok because neither Bell nor GHZ rule it out. And if (1) is ok, then (2) isn't warranted.

DevilsAvocado said:
I agree, we really don’t know exactly what’s going on. There are different interpretations, trying to explain, but no 'official explanation'.
Agreed.

DevilsAvocado said:
I do hope that we all agree that 'something happens' that seems to violate locality ... ...
The discussion in this thread is centered on the fact that we don't all agree that something happens (wrt quantum entanglement) to violate c-limited locality.
 
  • #252
DrChinese said:
That sounds all well and good, but:

0, 120, 240: give me the dataset. The rest is just words, like "pigs fly". Easy to say, give me an example that addresses these. I learned this from Bell, so if it doesn't apply, it should be easy to come up with.
I think we agree that viable lhv accounts of entanglement are impossible. The question is why. This is what we're trying to sort out.

You're saying that it might be due to nonlocal effects of one side of the experimental setup on the other. I'm saying that not only is that not a physical explanation (equivalent to "pigs fly"), but also that there is a simpler, physical/formal, explanation for why Bell inequalities are violated experimentally and for the lhv inconsistencies via the GHZ theorem -- and that it might have to do with requiring the joint context to be represented by a hidden variable which is irrelevant wrt determining the results of the joint context.

The "give me the dataset" question you're asking has to do with the simplest version of Bell's theorem, the simplest Bell inequality (which I referred to in the recent, really long, thread ostensibly dealing with the fair sampling loophole -- a loophole we also agree doesn't matter wrt determining the meaning of BI violations and GHZ inconsistencies). This is why I bring up the cos^2 Theta rule wrt optical Bell tests, where it clearly does apply. There's simply no reason to assume that the application of this optical law isn't in accord with c-limited local causality. And yet, the interpretation of Bell's theorem which takes it as pertaining to what does or doesn't exist in reality says, in effect, that "the correlation between the angular difference of the polarizers and the joint detection rate, formally expressed in terms of the local hidden variable, can't possibly duplicate the full range of qm results -- ie., the full cos^2 Theta angular dependency -- IF THE ENTANGLEMENT OF THE OPTICAL DISTURBANCES HAS A LOCAL COMMON CAUSE."

Now, if you delete the part in caps, then I agree. The joint detection rate can't possibly be viably expressed in terms of the LOCAL hidden variable (which entails that it can't possibly be viably expressed in a separable form, a form that is factorable into an expression of the individual detection rates). This is because the determining factors in the joint context are common, or (despite your objection to this terminology) global, variables: 1) the angular difference between the polarizers, and 2) the angular difference between the optical disturbance incident on the polarizer setting at one end and the optical disturbance incident on the polarizer setting at the other end.

This is, I think, the correct physical interpretation of the formal expression of 'quantum nonseparability' for the experimental situation under consideration. And, as you can see, it has nothing to do with nonlocal or ftl 'influences' between entangled photons.
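DrChinese's 0/120/240 challenge from a few posts back can be made concrete with a short script. This is only a sketch of the standard counting argument (the setting angles and the cos^2 matching rule are the ones discussed in this thread; the enumeration of "instruction sets" is the usual textbook device, not anyone's actual dataset):

```python
import itertools, math

# Three analyzer settings (degrees) from DrChinese's dataset challenge.
settings = [0, 120, 240]

# An LHV "instruction set" predetermines the outcome (+1/-1) at each
# setting; both photons carry the same instructions, so when the two
# analyzers use *different* settings the results match iff the
# predetermined answers for those two settings agree.
def match_rate(instructions):
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    matches = sum(instructions[i] == instructions[j] for i, j in pairs)
    return matches / len(pairs)

lhv_best = min(match_rate(inst)
               for inst in itertools.product([+1, -1], repeat=3))

# QM prediction for an HH+VV entangled pair: the probability of matching
# results at analyzer angles a, b is cos^2(a - b). Here every distinct
# pair of settings differs by 120 degrees.
theta = math.radians(120)
qm_match = math.cos(theta) ** 2

print(f"LHV minimum match rate : {lhv_best:.4f}")   # 0.3333
print(f"QM predicted match rate: {qm_match:.4f}")   # 0.2500
```

No instruction set gets below a 1/3 match rate, while QM predicts 1/4 for these settings -- which is why no local dataset can be produced.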
 
  • #253
DevilsAvocado said:
... ThomasT has his own version of the 'scientific model': First you decide how the world should work according to your personal taste, and nothing else – then you make up pseudo-mathematical theories that seems to fit your personal view.

And when ThomasT is proven wrong – he just refuse to reply – blaming on things being a bit tedious.

And I agree – this is getting tedious – in the manner this debate is performed.

Finally: I must point out that I from the beginning had the same view as ThomasT – this "spooky action at a distance" can’t be true! This is just mathematical mumbo-jumbo from physicist trying to raise more funding by presenting spectacular theories!
If you're interested in learning, as I am, then a little silliness is ok, and even somewhat refreshing when one gets saturated with the stuff that's being considered. However, stuff like the above is not ok. Not only is it a false personal attack, but, more importantly, it doesn't further the discussion.

Clearly, we're all operating from some degree of ignorance. A primary function of this forum (and science in general) is to help us learn about the world.

Learning sometimes requires actually thinking.

If you spend all your time dealing with youtube videos, cheerleading, criticizing, fashioning tin foil regalia for you and your pets, snipping quotes here and there, etc., etc. -- basically anything but actually thinking about and researching the stuff you're commenting on -- then ... well, your (mostly off-topic) posts speak for themselves.

Now, there's a legitimate consideration to explore, a question about the applicability of Bell's (and GHZ's) formal requirements/constraints that might obviate ftl 'influences' between entangled particles, that hasn't been definitively resolved.
 
  • #254
DrC,
There's nothing in your response that I factually disagree with, but I'm going to make some points wrt semantics and the importance of dealing with them. You're certainly not the primary audience I have in mind, but hopefully it'll provide food for thought.

DrChinese said:
I realize there may be some differences in what kind of hidden variables might exist. My thing is to avoid getting into a semantic argument (I would rather focus on the physics).
Yes I generally support this sentiment. It's more difficult in the context of EPR because the physics we have merely defines constraints of a presumably unknown model, which is dependent on ontological features which are semantically defined. What I find most distasteful in this context is singling out an ontology, and making ad hoc demands and rejections of the physics we do have on those grounds. People lose winnable debates doing this all the time by overstating both their positive and negative claims.

DrChinese said:
The key to that is to FIRST accept that there cannot be local hidden variables of the type identified by EPR (i.e. no objective elements of reality).
Again I absolutely agree, the constraints imposed by BI are absolutely real, and constitute some real physical constraints we have to work with in this area. Denial should be and is costly for those who do so. However, many people are predisposed to ontologically invert the words used here, as I'll articulate next. Debates that fail to recognize this do get painful.

DrChinese said:
I think you can then move on to extend the scope further, to include hypothetical classical components of elements of reality (i.e. where the element of reality is an observable, but the component may not be).
This is the type of model I like playing with. The semantics issues arise when you ask: is it the elements postulated to be ontically real and to remain unobservable that are the "elements of reality", or is it the measurable variables whose existence is dependent on the relative configuration space of the ontic elements? Einstein realism is best served by the first, empiricism by the second. In fact there is no physical significance to these ontological distinctions whatsoever, and those seeking Einstein realism would be well served to recognize the empirical perspective. Pure empiricism may have limits, but it forever remains the sole source of cogency and legitimacy for any theoretical model. Arguing the absolute legitimacy of one ontology over the other is in itself a no-go. If we accepted raw claims like this we'd still be stuck on Aristotle. Hence your focus on the physics is far more than justified, just not entirely feasible in the face of unknowns, or of what specific constraints on such unknowns actually entail.

DrChinese said:
In Relational BlockWorld, I like to say that the hidden variables lie in the future. RUTA will probably choke on that description. :smile: If you accept that, then you would probably end up concluding that the future influences the past and causality is lost. RUTA would probably be OK with that, because he considers RBW to be acausal. :smile:
I do find it an ugly distortion of my ontological predispositions, but for the reasons I provided above I'm still undecided how it'll fare under various ontological transforms. RUTA's response was a bit predictable; RUTA appears quite adept at navigating these ontological minefields. Makes it all the more fascinating.

DrChinese said:
There also could be all kinds of weird rules at the subatomic level that are hidden from us. But the problem with that "escape" is that where else do they manifest themselves? Were there other evidence, it would make more sense.
Yep, the unknown is a beast. I can't rightly or legitimately get into much detail of my own perspective in this thread, but I generally tend to think complex rule sets indicate the need to look deeper for simpler ones. The range of empirical data involved is extensive, of which EPR is just a small piece. I'm not really happy with any mere interpretation, or anything short of unification. I also tend to find ad hoc rules crafted solely to sweep problem issues under the rug, making them unobservable, highly distasteful. Unfortunately I can't honestly yell BS either without something better on empirical, not ontological, grounds.
 
Last edited by a moderator:
  • #255
ThomasT said:
I, and others, think the non-viability of lhv representations is due to a problem with the formal requirements not fitting the experimental situations, and, if that's so, then ftl 'explanations' for BI violations (and the correlations) are obviated.

What others? What separation of the source would satisfy?
 
  • #256
ThomasT, please explain – in what way is this ok, and furthering the discussion:
ThomasT said:
... If you spend all your time dealing with youtube videos, cheerleading, criticizing, fashioning tin foil regalia for you and your pets, snipping quotes here and there, etc., etc. -- basically anything but actually thinking about and researching the stuff you're commenting on -- then ... well, your (mostly off-topic) posts speak for themselves. ...


Criticizing? You’re talking about this?
ThomasT said:
Nice rant, but (1) I didn't say anything about Bell test loopholes, ...


Tin foil regalia? Do you mean this initial occasion of derogatory insinuation?
zonde said:
... There is nice picture that I spied in another thread: ...


YouTube videos? Well, I have to pass that complaint to PF ADMIN, since there is clearly a function for embedded YouTube videos in the editor, which most probably is meant to be used.

ThomasT said:
... basically anything but actually thinking about and researching the stuff you're commenting on ...
I haven’t fully understood your implementation of the 18th century optical law yet, but you must clearly be exaggerating – in claiming that it can be used for mind reading as well??


I think that those who followed this thread from start clearly can see what’s true or false in everything that ThomasT is claiming.


Finally, I must inform any other reader that dear old ThomasT has deliberately distorted the quoting. If ThomasT does this again, I will report him, since this is clearly a violation of the Physics Forums Global Guidelines:

"When you quote from a post, please delete large sections that are not directly relevant to your response, to make reading easier, but do not distort the original poster's meaning in the process."

Here is the correct quote:
DevilsAvocado said:
... Finally: I must point out that I from the beginning had the same view as ThomasT – this "spooky action at a distance" can’t be true! This is just mathematical mumbo-jumbo from physicist trying to raise more funding by presenting spectacular theories!

But I changed my mind. And I can assure ThomasT – it didn’t hurt at all...
 
Last edited:
  • #257
I wouldn't want to be without DA's contributions to this thread; he brings an element of levity and concise thought that can be lost sometimes. RUTA, you and Dr C really make this thread perfect for me, but I must be honest and say ThomasT, you seem to be picking a fight here. I know that, because I've picked a few in my time as well (as indicated by the line through my name for 10 days). I don't believe DA is being mean, he's just expressing what RUTA did very simply: your point is not scientific, not in the spirit of inquiry into this matter, and you refuse or are unable to share a theory of your own to counter the matters at hand.

The issue here is not: "ftl" anything when considered within the QM framework. Is there a better theory yet to come? Sure! Is it here yet? No, and until it is this kind of random challenge to otherwise reasonable and well accepted points is fruitless. Can there be a fair sampling that would satisfy you? Why don't you share your view, with the math, rather than simply verbally dissecting those of others?
 
  • #258
Thank you so very much Frame Dragger. You are much too kind.
 
  • #259
DrChinese said:
That isn't so. There is absolutely no evidence (cite it if you think I am wrong) whatsoever that the classical Product state is the limit as efficiency approaches 100%.
http://arxiv.org/abs/1005.0802
From this paper:
"The count rate is about 80k/s in each arm, and the coincidence is about 20k pairs per second. As a result, we prepare the polarization entanglement as

[tex]|\Phi^{+}\rangle_{s}=1/\sqrt{2}\;(|H\rangle_{1}|H\rangle_{2}+|V\rangle_{1}|V\rangle_{2})[/tex], (1)

where [tex]|H\rangle(|V\rangle)[/tex] denotes horizontal (vertical) polarization, the subscripts 1 and 2 specify spatial modes, and subscript s means state of source. The visibilities for the polarization correlations are about 98.1% for [tex]|H\rangle/|V\rangle[/tex] basis and 92.6% for [tex]|+45^{\circ}\rangle/|-45^{\circ}\rangle[/tex] basis, without the help of narrow bandwidth interference filters."

From the first sentence we can calculate that the detection efficiency is about 25% (20k / 80k per second).
Quasi-decoherence (deviation from the perfect 100% case) is 1.9% for [tex]|H\rangle/|V\rangle[/tex] measurement but 7.4% for [tex]|+45^{\circ}\rangle/|-45^{\circ}\rangle[/tex] measurement.

Of course it is hard to tell what the reason for that difference is without explicitly testing what affects this quasi-decoherence. But let me put it this way: this observation is consistent with the hypothesis that the classical product state is the limit as efficiency approaches 100%.
Whereas the usual QM interpretation does not predict any difference between decoherence for those two measurements in the ideal case. You can, however, hypothesize that there were imperfections in this setup, like the angle of the incident beam on the PDC crystal not being ideal, and things like that.
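zonde's arithmetic here is easy to reproduce; a minimal sketch in Python, using only the numbers quoted from the paper:

```python
# Reproducing zonde's arithmetic from the cited paper's numbers
# (arXiv:1005.0802): singles ~80k/s per arm, coincidences ~20k/s.
singles_rate = 80_000      # counts per second in each arm
coincidence_rate = 20_000  # coincident pairs per second

pair_efficiency = coincidence_rate / singles_rate   # 0.25

# "Quasi-decoherence" = deviation of the measured visibility from the
# ideal 100% case, for the two measurement bases quoted in the paper.
visibility_HV   = 0.981    # H/V basis
visibility_diag = 0.926    # +45/-45 basis

deviation_HV   = 1 - visibility_HV     # -> 1.9%
deviation_diag = 1 - visibility_diag   # -> 7.4%

print(f"pair detection efficiency: {pair_efficiency:.0%}")
print(f"H/V deviation: {deviation_HV:.1%},  +/-45 deviation: {deviation_diag:.1%}")
```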
 
Last edited by a moderator:
  • #260
zonde said:
http://arxiv.org/abs/1005.0802
From this paper:
"The count rate is about 80k/s in each arm, and the coincidence is about 20k pairs per second. As a result, we prepare the polarization entanglement as

[tex]|\Phi^{+}\rangle_{s}=1/\sqrt{2}\;(|H\rangle_{1}|H\rangle_{2}+|V\rangle_{1}|V\rangle_{2})[/tex], (1)

where [tex]|H\rangle(|V\rangle)[/tex] denotes horizontal (vertical) polarization, the subscripts 1 and 2 specify spatial modes, and subscript s means state of source. The visibilities for the polarization correlations are about 98.1% for [tex]|H\rangle/|V\rangle[/tex] basis and 92.6% for [tex]|+45^{\circ}\rangle/|-45^{\circ}\rangle[/tex] basis, without the help of narrow bandwidth interference filters."

From the first sentence we can calculate that the detection efficiency is about 25% (20k / 80k per second).
Quasi-decoherence (deviation from the perfect 100% case) is 1.9% for [tex]|H\rangle/|V\rangle[/tex] measurement but 7.4% for [tex]|+45^{\circ}\rangle/|-45^{\circ}\rangle[/tex] measurement.

Of course it is hard to tell what the reason for that difference is without explicitly testing what affects this quasi-decoherence. But let me put it this way: this observation is consistent with the hypothesis that the classical product state is the limit as efficiency approaches 100%.
Whereas the usual QM interpretation does not predict any difference between decoherence for those two measurements in the ideal case. You can, however, hypothesize that there were imperfections in this setup, like the angle of the incident beam on the PDC crystal not being ideal, and things like that.

Where in the paper does it say ANYTHING remotely similar to the idea that the Product State statistics are approached?

By way of example: at 0 degrees, the Product State is 25.0% and the stated observation was apparently 1.9%. Does not seem too close. At 45 degrees, the Product State value should be 50.0% and the actual was apparently 42.6%.

Don't you think the authors would be raising flags if the stats deviated from QM predictions by a significant amount?

By the way, the 25% detection stat is a bit deceiving, because that value is net -- net meaning for both detectors jointly. Obviously, there are a lot of unmatched hits too. I would estimate the gross efficiency at close to 50% (since 50%^2 = 25%).
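DrChinese's net-vs-gross point is just the statement that independent losses in the two arms multiply; a one-liner, assuming (as a simplification) that the two arms have equal efficiency:

```python
import math

# A 25% *coincidence* (net, joint) efficiency is consistent with each
# individual detector arm running near 50%, because independent losses
# in the two arms multiply: eta_joint = eta_arm ** 2.
net_joint_efficiency = 0.25

per_arm_efficiency = math.sqrt(net_joint_efficiency)  # 0.5
print(f"estimated per-arm (gross) efficiency: {per_arm_efficiency:.0%}")
```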
 
Last edited by a moderator:
  • #261
my_wan said:
... dependent on ontological features which are semantically defined. ... singling out an ontology, and making ad hoc demands ...

assigning properties to ontology that is independent of us.
 
Last edited:
  • #262
Interesting and entertaining (at times) discussion. :)

It's clear that Bell Test (violation) experiments are correct (to anyone sensible), and it's also clear that Special Relativity is correct, so to account for entanglement we need either "magic" or a FTL causal mechanism that doesn't contradict SR.

A causal mechanism that doesn't contradict SR would mean it couldn't interact with any classical matter in any classically known way, but that's not such a big deal or even unusual, since (for example) Evolution created our consciousness and that seems to be non-classical (and the Blind Watchmaker is skillful but She's not a magician ;) )

It seems to me that this (controversial) piece of the puzzle could be eliminated or confirmed by devising tests to demonstrate a (hypothesised) finite limit on the speed of the entanglement correlations.

(I'm hypothesising that entanglement is due to FTL signalling and is not an instantaneous event)

There are two obvious ways I can see how this might be observed.

1. Adapt/Refine the tests which close the communication loophole so that they report an upper bound on the possible speed that ftl "signalling" occurs

2. Increase the number of bits in experimental quantum computers until the finite bound becomes noticeable due to delays in calculations

Option 1 may be practical if the upper bound on the "signalling" speed is reasonable (please god ;) ), say c^k for some lowish value of k. Option 2 is probably impractical, but each bit requires an exponential increase in the signalling path, so it might become noticeable earlier than you might think.

Once confirmed, we could then work much more confidently on constructing a model of how such a signal might travel, and how it does not violate SR (it may be restricted to another dimension, where ftl isn't forbidden)

I really can't believe entanglement enables instantaneous correlations across unlimited space.
 
Last edited:
  • #263
unusualname said:
It seems to me that this (controversial) piece of the puzzle could be eliminated or confirmed by devising tests to demonstrate a (hypothesised) finite limit on the speed of the entanglement correlations

(I'm hypothesising that entanglement is due to FTL signalling and is not an instantaneous event)


Believe it or not, there have been such tests. They show that if there is an FTL influence, it must be at least 10,000 times the speed of light.

Source: http://arxiv.org/abs/0808.3316
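For a rough sense of how such a bound is extracted (a sketch with illustrative numbers, not the paper's actual analysis -- the real bound depends on the assumed preferred frame and the Earth's rotation):

```python
# If the two detection events are separated by distance d, and the
# correlations survive when the events are simultaneous to within dt in
# the candidate preferred frame, then any "influence" must have covered
# d in less than dt.
c = 299_792_458            # speed of light, m/s

d  = 18_000                # m -- roughly the Geneva experiment's separation
dt = 6e-9                  # s -- illustrative timing window (assumption)

v_min = d / dt             # minimum speed of the hypothetical influence
print(f"influence speed >= {v_min / c:,.0f} c")
```

With these illustrative numbers the bound comes out just above 10,000 c, the order of magnitude reported by the experiment.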
 
Last edited:
  • #264
unusualname said:
... It seems to me that this (controversial) piece of the puzzle could be eliminated or confirmed by devising tests to demonstrate a (hypothesised) finite limit on the speed of the entanglement correlations ...

Welcome to PF unusualname! (that is an unusual name!? :bugeye:) :smile:

Very interesting, sound and constructive thoughts, which at least I welcome to this thread, for a change!

I do think that some of this has already been tested, setting a lower bound of 10,000 times the speed of light (for those hypersensitive to quotes, I temporarily recommend closed eyes or a different activity, because now I’m about to do one of those very troublesome Wikipedia quotes):
http://en.wikipedia.org/wiki/Quantu...speed.22_of_the_quantum_non-local_connection
A 2008 quantum physics experiment performed in Geneva, Switzerland has determined that the "speed" of the quantum non-local connection (what Einstein called spooky action at a distance) has a minimum lower bound of 10,000 times the speed of light.[13] However, modern quantum physics cannot expect to determine the maximum given that we do not know the sufficient causal condition of the system we are proposing.


And here’s a link to the arXiv paper: http://arxiv.org/abs/0808.3316

You were asking about the finite limit, and that seems yet to be a problem.

But, please elaborate your thoughts; it’s really refreshing to have open-minded posters who like a fair and interesting discussion!

Edit: Ahhh! DrC beat me... sorry. (I’ll go and check your new FS gadget now)
 
Last edited by a moderator:
  • #265
DevilsAvocado said:
Edit: Ahhh! DrC beat me... sorry. (I’ll go and check your new FS gadget now)

I cheated, edited it... :smile:
 
  • #266
DrChinese said:
I cheated, edited it... :smile:
Hehe! Is that "RBW posting"!? :smile:
 
  • #267
DevilsAvocado said:
Hehe! Is that "RBW posting"!? :smile:

Definitely. :biggrin:
 
  • #268
DrChinese said:
Definitely. :biggrin:
Haha! We should revive "The Monty... Posting"...!? :rolleyes: (:biggrin:)
 
  • #269
10,000c? That's nothing, if god's being reasonable we might be lucky to get c^2 :)

Thanks for the link, I hadn't been aware of any specific results (although any of the experiments post-Aspect would have a roughly calculable lower bound, I assume)

I initially had the idea that the signal would travel between the particles along some kind of higher dimensional space closely tied to the classical 3d space traced out by each particle, but that doesn't tie in well with single particle interference effects which you would hope would be due to a similar mechanism.

But I don't think it's a highly subtle sub-planckian mechanism or anything else so devious, mainly because evolution is pretty straightforward and works with the simple tools and materials provided by the environment at granularity no worse than atomic level, and I am pretty convinced consciousness is related to entanglement.
 
Last edited:
  • #270
unusualname said:
It seems to me that this (controversial) piece of the puzzle could be eliminated or confirmed by devising tests to demonstrate a (hypothesised) finite limit on the speed of the entanglement correlations

(I'm hypothesising that entanglement is due to FTL signalling and is not an instantaneous event)

I really can't believe entanglement enables instantaneous correlations across unlimited space.

If the events are space-like related, even 1 m/s faster than c, there is a frame in which those events are simultaneous.
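RUTA's point follows directly from the Lorentz transformation; a quick check with illustrative numbers (working in units where c = 1 is an arbitrary choice):

```python
import math

# For spacelike-separated events (|dx| > c*|dt|), the boost
# v = c^2 * dt / dx is subluminal and makes the events simultaneous.
c = 1.0             # units with c = 1
dx, dt = 5.0, 3.0   # spacelike separation: |dx| > c*|dt|

v = c**2 * dt / dx              # required boost; 0.6 c here
gamma = 1 / math.sqrt(1 - (v / c)**2)

# Lorentz transformation of the time separation:
dt_prime = gamma * (dt - v * dx / c**2)
print(f"boost v = {v:.2f} c,  dt' = {dt_prime:.2e}")
```

The transformed time separation vanishes (up to floating-point noise), and since the required boost is below c, such a frame always exists for spacelike pairs of events.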
 
  • #271
RUTA said:
If the events are space-like related, even 1 m/s faster than c, there is a frame in which those events are simultaneous.

If you assume that the signal doesn't travel in classical space then its journey is not related to a SR reference frame, but you're right that there is a privileged frame in which it would appear (to the observer) that the (entangled) events had zero time between them (but the observer wouldn't be able to observe the "signal" traveling between the particles in any classical manner)

For 3 or more particle entanglements (eg quantum computer) you wouldn't have such a privileged reference frame such that all entanglement events were simultaneous.

I need to read that preprint linked to above carefully to understand the importance of the privileged reference frame they mention (apparently it's crucial to bohm-hiley's pilot wave constructions)

I don't think it's a big "cheat" to suggest that the signal travels in non-classical space, not with the ridiculous plethora of Calabi-Yau manifolds and the like thrown up by string theory.
 
Last edited:
  • #272
The test of entanglement speed (they call it "speed of quantum information") from the preprint Testing spooky action at a distance assumes a causal mechanism with a signal traveling in classical space. So I'm not sure whether the lower bound they measure in the conclusion is even correct if we instead assume the signal travels in non-classical space (either other dimensions or something even weirder). (There obviously is a lower bound; however, it's the upper bound I'm interested in.)

A simplistic model might be like this:

Code:
   ##      ##      ##      ##      ##
  #  #    #  #    #  #    #  #    #  #
 #    #  #    #  #    #  #    #  #    # 
#      ##      ##      ##      ##

 ------>  particle traveling at speed <= c in classical spacetime

    #
   # #    signal travels around extra dimensional
  #   #   ("kaluza-klein") coils at speed c^k

For "circular" coils we'd require the signal to travel at ~c^2 to keep up with the particle, but we can expect even faster speeds for efficient "communication" between the particles.

Now, it may be that these "spooky" signals can only travel "near" a path traced out by particles in classical space-time, and they should influence events restricted to those observed in entanglement (so the signals are responsible for the "magic" entanglement correlations we see, which can happen faster than the speed of light would allow with a classical signal)

In particular, there is no mechanism for transmitting FTL information through classical space (the entangled particles communicate with each other FTL, but this only achieves entanglement correlations of their quantum properties, which we can't deterministically influence). I can see how this may be difficult to check for in current Bell test experiments, since you need to fine-tune an experiment to measure what may be a very tiny delay between the entangled particles switching quantum states.

So perhaps the better way forward would be to detect delays in multi-particle entanglements such as sufficiently complex quantum computers (what are we up to so far ~100 bits yet?)

On a side note, I wonder why Bohm, Hiley et al didn't consider a signal traveling in non-classical space; perhaps physics models based on exotic topology weren't in vogue and they were scared of additional ridicule?

And on the speculative subject of entanglement & consciousness I might also note that our brains don't seem to allow us to think at infinite speed; there seems to be a (fairly cumbersome) delay involved, but of course that may be due to the requirement for complex chemical and biological mechanisms relating to memory and the like. (In fact there is a well-known 40 Hz effect observed in human brains, eg see this paper.) It's interesting that autistic savants and very young children seem able to process certain information quicker than normal adults; that may be due to their brains lacking certain biological mechanisms and hence not being slowed down so much.

I would think the determination of an upper bound on entanglement speed would be at least as important as finding the Higgs boson, so it's surprising that there doesn't seem to be much experimental effort in this direction.
 
Last edited:
  • #273
DrChinese said:
Where in the paper does it say ANYTHING remotely similar to the idea that the Product State statistics are approached?
You were talking about evidence, not references. You said: "That isn't so. There is absolutely no evidence (cite it if you think I am wrong) ..."
Besides, this experiment wasn't about violation of Bell inequalities, so the calibration of the entangled state was only part of the preparations for the main experiment.

DrChinese said:
By way of example: at 0 degrees, the Product State is 25.0% and the stated observation was apparently 1.9%. Does not seem too close. At 45 degrees, the Product State value should be 50.0% and the actual was apparently 42.6%.
visibility is defined as follows:
V=(max-min)/(max+min)
For the visibility at a minimum we can equivalently write (it's just the same formula rearranged):
V = 100% - 2*min/(max+min)

Taking these formulas into account, the visibility for a product state at +45°/-45° (the minimum; the maximum in this case is the coincidence rate at +45°/+45°) would be 0 at 100% efficiency.
The visibility for a product state at H/V (the minimum; the maximum is the coincidence rate at H/H) would be 1.
So for a product state with a diminishing interference term, as efficiency approaches 100%, the H/V visibility should stay the same, but the +45°/-45° visibility should tend to 0.

The QM prediction used by Bell was that the theoretical visibility at 100% efficiency is 1 for any angle α, with β = α + π/2.

So according to QM, the visibility should be the same for +45°/-45° and H/V (of course not exactly 1 once the effect of noise is considered).
This is not so in this experiment, by quite a significant amount.
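These visibility claims are easy to check numerically. A minimal sketch, assuming a maximally entangled polarization pair with coincidence probability ½cos²(α−β), and modelling the "product" case as an equal statistical mixture of HH and VV (these model choices are my assumptions, chosen to illustrate the formulas, not taken from the paper under discussion):

```python
import math

def visibility(coinc, alpha):
    """V = (max - min)/(max + min), scanning analyzer B over 0..179 deg."""
    rates = [coinc(alpha, math.radians(b)) for b in range(180)]
    mx, mn = max(rates), min(rates)
    return (mx - mn) / (mx + mn)

def entangled(a, b):
    # maximally entangled pair: P(a, b) = 1/2 cos^2(a - b)
    return 0.5 * math.cos(a - b) ** 2

def product(a, b):
    # equal statistical mixture of |HH> and |VV>
    return 0.5 * (math.cos(a) ** 2 * math.cos(b) ** 2
                  + math.sin(a) ** 2 * math.sin(b) ** 2)

print(visibility(entangled, 0.0))               # 1.0 in the H/V basis
print(visibility(entangled, math.radians(45)))  # 1.0 at 45 deg as well
print(visibility(product, 0.0))                 # 1.0 in the H/V basis
print(visibility(product, math.radians(45)))    # ~0 at 45 deg
```

This reproduces the statement above: the entangled state has unit visibility in every basis, while the product (mixed) case keeps visibility 1 at H/V but drops to 0 at ±45°.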

DrChinese said:
Don't you think the authors would be raising flags if the stats deviated from QM predictions by a significant amount?
No, I don't think so, because QM says that "decoherence happens". Another thing is that a diminished interference term might be attributed to some imperfection of the setup that you somehow can't track down. And you generally do not raise flags because of some unknown imperfection or noise (lest it look like you were just too lame to track it down): you try to eliminate it, and if you can't, you ignore it (or at most include it in some statistical error estimates), hoping it won't affect your experiment too much.

DrChinese said:
By the way, the 25% detection stat is a bit deceiving. That is because the value is net. Net meaning for both detectors jointly. Obviously, there are a lot of unmatched hits too. I would estimate the gross efficiency at close to 50% (since 50%^2 = 25%).
No, it is not. There is no reason to square the efficiency if you have identical rates in both arms.
Say we have 25% efficiency in the first arm (an 80k/s rate) but 100% efficiency in the second arm (a 320k/s rate), which we divide into two parts, 80k/s and 240k/s.
Then 1 of every 4 clicks from the first arm will make a coincidence with the 80k/s part of the second arm, and 3 of every 4 clicks with the 240k/s part. Altogether, every click from the first arm makes a coincidence with a click in the second arm (since it has 100% efficiency).
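The arithmetic here can be checked directly: with independent detection losses, the coincidence efficiency is the product of the two arm efficiencies, so a 25% net coincidence rate is consistent both with 50% per arm and with the 25%/100% split in the example above. A quick Monte Carlo sketch (the efficiencies are hypothetical illustrative values):

```python
import random

def coincidence_rate(eff_a, eff_b, pairs=200_000, seed=1):
    """Fraction of emitted pairs detected in BOTH arms,
    assuming independent detection losses in each arm."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(pairs)
               if rng.random() < eff_a and rng.random() < eff_b)
    return hits / pairs

print(coincidence_rate(0.5, 0.5))    # ~0.25 from 50% per arm
print(coincidence_rate(0.25, 1.0))   # ~0.25 from a 25%/100% split
```

So a 25% coincidence fraction alone doesn't pin down the per-arm efficiencies; only the singles rates in each arm do.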
 
  • #274
unusualname said:
For 3 or more particle entanglements (e.g. a quantum computer) you wouldn't have a privileged reference frame in which all entanglement events were simultaneous.

But, any two are simultaneous in SOME frame and their temporal order switches in other frames, so how do you argue for an unambiguous causal ordering without resorting to a preferred frame?
unusualname said:
I need to read that preprint linked to above carefully to understand the importance of the privileged reference frame they mention (apparently it's crucial to bohm-hiley's pilot wave constructions)

I don't think it's a big "cheat" to suggest that the signal travels in non-classical space, not with the ridiculous plethora of Calabi-Yau manifolds and the like thrown up by string theory.

My understanding of BM is the wave function is updated in configuration space, so configuration space is "real" for them. But, I don't see how the precise mechanism avoids FTL comm or a preferred frame for an unambiguous causal ordering. The events (measurement outcomes) occur in spacetime and their temporal order is relative if they're space-like related, regardless of what is going on "behind the scenes." Therefore, if you want to explain them via "causal relations" you either need a preferred frame or FTL comm. [There is another way out -- you can say the future has causal influence on the past, but that's another discussion.]
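The frame-dependence of temporal order for space-like separated events follows directly from the Lorentz transformation Δt′ = γ(Δt − vΔx/c²): whenever |Δx| > c|Δt|, there is a boost that flips the sign of Δt′. A minimal sketch in units with c = 1 (the event separations are arbitrary illustrative values):

```python
import math

def boosted_dt(dt, dx, v):
    """Time separation between two events in a frame moving
    at speed v along x (units with c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# Space-like separated events: |dx| > |dt|
dt, dx = 1.0, 3.0

print(boosted_dt(dt, dx, 0.0))  # lab frame: positive, A before B
print(boosted_dt(dt, dx, 0.9))  # boosted frame: negative, B before A
```

Any v with 1/3 < v < 1 reverses the order here, which is the point RUTA is making: for space-like separated measurement outcomes there is no frame-independent "which came first."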
 
  • #275
RUTA said:
But, any two are simultaneous in SOME frame and their temporal order switches in other frames, so how do you argue for an unambiguous causal ordering without resorting to a preferred frame?

You argue that a classical observer can't observe the sequence of causal events being caused (by the non-classical FTL signal), they only observe the final states.

The fact that an observer might see simultaneous and backward events isn't really a problem, except that it might confuse the observer :) (and remember I'm arguing that the entanglement events can't transmit classical information FTL)

Incidentally, I stated above that we couldn't deterministically influence the quantum state; I think that's true unless, er, you actually are the particle! I.e., consciousness allows us to choose certain quantum states, which then propagate to macroscopic events.

So in my (very speculative) theory, if we had a brain the size of a galaxy we probably could communicate FTL information within it provided the "information" was restricted to thoughts.

Ignoring the wild speculations: for a significant breakthrough in quantum theory, a reasonably simple result is required on an upper bound for "the speed of quantum information".

That would be paradigm changing :)
 
Last edited:
  • #276
ah, hold your horses, this has been theoretically postulated:

fundamental quantum limit on the rate of operation of any information-processing system

In a paper published in the journal Physical Review Letters, Levitin and Toffoli present an equation for the minimum sliver of time it takes for an elementary quantum operation to occur. This establishes the speed limit for all possible computers. Using their equation, Levitin and Toffoli calculated that, for every unit of energy, a perfect quantum computer spits out ten quadrillion times more operations each second than today's fastest processors.

http://arxiv.org/abs/0905.3417
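The bound in that paper descends from the Margolus-Levitin theorem, which limits the time for a quantum system of average energy E to evolve to an orthogonal state to t ≥ h/(4E), i.e. at most 4E/h elementary operations per second. A back-of-envelope sketch (taking one joule as the available energy is purely illustrative):

```python
H_PLANCK = 6.62607015e-34  # Planck constant, J*s

def max_ops_per_second(energy_joules):
    """Margolus-Levitin bound: at most 4E/h orthogonal state changes
    ("elementary operations") per second at average energy E."""
    return 4.0 * energy_joules / H_PLANCK

print(max_ops_per_second(1.0))  # ~6.0e33 ops/s for one joule of energy
```

That ~10^33 ops/s per joule is what makes the gap to any present-day processor so enormous.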
 
Last edited:
  • #277
So given the above result (which is basically derived from the energy-time uncertainty relation; there are other papers which discuss this, e.g. see "Ultimate physical limits to computation"), we already have an upper bound on the operation rate of quantum computers, hence it would be difficult to distinguish the time taken for the entanglement effects to propagate from this limit on qubit switching time.

So we'd need to devise our experiment more carefully to distinguish the time taken for propagation of entanglement events.

If we can assume that the classical-space distance is quantitatively related to the length of the non-classical path followed by the FTL entanglement signals (so that increasing the usual classical-space separation between our entangled pairs also increases the non-classical path I'm hypothesising), then we could demonstrate that the "speed of entanglement" is finite by carrying out identical experiments at different distances and recording an increase in the time for the entangled pairs to correlate.
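To get a feel for what such timing experiments can resolve: if the two detection events appear simultaneous within the apparatus's timing resolution Δt at detector separation d, any hypothetical signal connecting them must travel at v ≥ d/Δt. A sketch with hypothetical numbers (the 10 km separation and 1 ns coincidence window are illustrative assumptions, not from any specific experiment):

```python
C = 299_792_458.0  # speed of light, m/s

def speed_lower_bound(separation_m, timing_resolution_s):
    """Minimum signal speed, in units of c, required for the
    correlations to appear simultaneous within the timing resolution."""
    return (separation_m / timing_resolution_s) / C

# hypothetical setup: detectors 10 km apart, 1 ns coincidence window
print(speed_lower_bound(10e3, 1e-9))  # -> roughly 33,000 c
```

So each null result (no detected delay) only pushes the lower bound higher; establishing a finite speed requires actually resolving a delay, which is why the experiment gets harder as the bound grows.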

Admittedly, this is looking trickier and trickier...
 
  • #278
unusualname said:
You argue that a classical observer can't observe the sequence of causal events being caused (by the non-classical FTL signal), they only observe the final states.

They can observe the sequence. The problem is other people will observe different sequences.

unusualname said:
The fact that an observer might see simultaneous and backward events isn't really a problem, except that it might confuse the observer :) (and remember I'm arguing that the entanglement events can't transmit classical information FTL)

They will be confused if (1) the events are space-like separated, (2) they believe there is a causal connection between the events, (3) there is no preferred frame, and (4) the future doesn't causally influence the past. They will be confused because this is a self-inconsistent set of assumptions.
 
  • #279
RUTA said:
They can observe the sequence. The problem is other people will observe different sequences.

No, they can't: they can only observe the final quantum states, which had a causal sequence determined by the journey taken by the FTL non-classical signal responsible for the entanglement correlations. The observer will just see the final (classically observable) quantum states pop up in some arbitrary order determined by their classical reference frame.

It doesn't matter that other observers might claim the states appeared in a different order; no rule of Special Relativity is broken if no classical information was transmitted.

RUTA said:
They will be confused if (1) the events are space-like separated, (2) they believe there is a causal connection between the events, (3) there is no preferred frame, and (4) the future doesn't causally influence the past. They will be confused because this is a self-inconsistent set of assumptions.

Well, QM is confusing :) They might believe all they like that there is a causal connection between the events, but there's no classical causal connection between them, that's for sure.

I don't believe in fuzzy (or philosophically devious) interpretations of entanglement which assume some magic instantaneous effect. I'd rather be more scientific and accept that our reality has some additional (but mathematically constructible) components which, when taken into consideration, give us a way to construct new physics models that append to the old and don't contradict long-proven observations of theories like SR.
 
  • #280
unusualname said:
... For 3 or more particle entanglements (e.g. a quantum computer) you wouldn't have a privileged reference frame in which all entanglement events were simultaneous.


Found one multiqubit (four-qubit) entanglement experiment by Zeilinger et al. that cannot be described by local realism:
http://homepage.univie.ac.at/philip.walther/paper/ClusterBelll_PRL05_95_020403.pdf
Cluster states are a new type of multiqubit entangled states with entanglement properties exceptionally well suited for quantum computation. In the present work, we experimentally demonstrate that correlations in a four-qubit linear cluster state cannot be described by local realism. This exploration is based on a recently derived Bell-type inequality [V. Scarani et al., Phys. Rev. A 71, 042325 (2005)] which is tailored, by using a combination of three- and four-particle correlations, to be maximally violated by cluster states but not violated at all by GHZ states. We observe a cluster-state Bell parameter of 2.59 ± 0.08, which is more than 7 σ larger than the threshold of 2 imposed by local realism.
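The quoted significance follows directly from the stated numbers, (2.59 − 2)/0.08 ≈ 7.4σ; a one-line check:

```python
S, err, bound = 2.59, 0.08, 2.0  # measured Bell parameter, its error, local-realism bound
sigmas = (S - bound) / err
print(round(sigmas, 1))  # -> 7.4
```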

 
Last edited by a moderator:
