New quantum experiments and their implications

Summary
Recent discussions on quantum experiments focus on the implications of entanglement swapping, particularly the idea of backward-in-time communication. The process involves creating pairs of entangled photons, measuring some to establish correlations, and then using a third party's (Victor's) choice to determine the entanglement status of the remaining photons. Critics argue that while it may seem possible to communicate backward in time, the randomness of the correlations and the need for classical communication to filter out the entangled pairs prevent this. Ultimately, the consensus is that despite intriguing theoretical possibilities, practical backward-in-time communication remains impossible due to the inherent nature of quantum mechanics.
  • #31
San K said:
SPDC is stimulated by random vacuum fluctuations, and hence the photon pairs are created at random times. The conversion efficiency is very low, on the order of 1 pair per every 10^12 incoming photons.

Thanks! Wasn't really sure what the order of magnitude was.
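
For a rough sense of scale (my own back-of-the-envelope numbers, assuming a 1 mW pump at 405 nm purely for illustration): the pump delivers roughly (10^-3 W)(405x10^-9 m)/(hc) ≈ 2x10^15 photons per second, so at a conversion efficiency of order 10^-12 that is only a few thousand down-converted pairs per second.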
 
  • #32
al onestone said:
Your quote "I am still convinced that there is a hidden variable theorem for all of quantum mechanics." tells me you need a little work on interpretational QM. I know that Bell's theorem and the violation of the CHSH inequalities do not convince a lot of people about quantum nonlocality, but I would advise you that seeking out a hidden-variables-type explanation is not the right direction. If I could make two suggestions:

1) Read Ballentine's chapter on Bell's theorem; that will get you past any hidden-variables-type theory.

If you've already done that, then move on to the real single-world interpretation of QM.

2) Read Zeilinger's 1999 paper "A Foundational Principle for Quantum Mechanics" (Found. Phys.). Believe me, as a believer in physical realism, there is no better explanation of the laws of QM than the information interpretation written by Zeilinger (and von Weizsäcker, Wheeler, etc.).

I don't know much about the article you posted but I know that some other threads have some info on the matter (www.sciforums.com "retrocausality in action")

My explanation is that there is nothing strange or retrocausal happening in this experiment. The results of Alice and Bob's measurements can later be considered "entangled" regardless of their outcomes at the time. It only requires that when Victor makes his (Bell-state) measurement, the system (which already knows the outcomes of Alice and Bob's measurements, because it is the system, after all) simply projects onto the appropriate symmetry state to make it seem as if the results of Alice and Bob's measurements were already entangled, which they were not.

This is interesting to weak-measurement theorists, though, because it seems that Alice and Bob are post-selecting Bob's measurement in advance.

Thanks for your suggestions. I haven't read those articles yet, but at least the Wikipedia article on CHSH fails to convince me as well. More specifically, this quote from Wikipedia:
Note that in all actual Bell test experiments it is assumed that the source stays essentially constant, being characterised at any given instant by a state ("hidden variable") λ that has a constant distribution ρ(λ) and is unaffected by the choice of detector setting.

I fail to see why one would make the assumption that ρ(λ) has a constant distribution. It seems to me that this leaves a massive gap where hidden-variable theories may still exist, or is that mere ignorance on my side?
 
  • #33
gespex said:
I fail to see why one would make the assumption that ρ(λ) has a constant distribution. It seems to me that this leaves a massive gap where hidden-variable theories may still exist, or is that mere ignorance on my side?

We expect random results. If the distribution were unbalanced, we would notice that in experiments pretty quickly, e.g. there would be more |H> than |V> when we measure at 30 degrees. But even if the source does have some asymmetry in the hidden variables themselves, we can accept that too, as long as there is a constant expectation value.
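
To make that concrete, here is a minimal Python sketch (my own illustration, not anything from the paper or this thread): a deterministic local model drawing λ from one fixed, setting-independent distribution lands exactly at the CHSH bound of 2, while quantum mechanics predicts up to 2√2 ≈ 2.83 for the same settings. The outcome rule and analyser angles below are illustrative assumptions only.

```python
# A minimal sketch: any local model with a single, setting-independent
# distribution rho(lambda) obeys the CHSH bound |S| <= 2, while quantum
# mechanics allows up to 2*sqrt(2).
import numpy as np

rng = np.random.default_rng(0)

def outcomes(setting, lam):
    # Deterministic local rule: +1 if the hidden polarisation lam lies
    # within 45 degrees of the analyser setting, else -1.
    return np.where(np.cos(2.0 * (lam - setting)) >= 0.0, 1, -1)

def correlation(a, b, n=200_000):
    # Same fixed rho(lambda) for every pair of settings (uniform on [0, pi)).
    lam = rng.uniform(0.0, np.pi, n)
    return np.mean(outcomes(a, lam) * outcomes(b, lam))

# Standard CHSH analyser settings (radians).
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))
print(f"local hidden-variable model: S = {S:.2f}  (local bound 2)")
print(f"quantum-mechanical maximum:  S = {2 * np.sqrt(2):.2f}")
```

Running it prints S ≈ 2.00 for the local model against the quantum 2.83; that gap is what the CHSH experiments test.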
 
  • #34
zonde said:
I would like to find that out too. For me, the derivation in the paper (p. 14) is too short to follow, so I won't try to offer a derivation for the HV and VH separable states.

Hi Stevie,

I will write the state evolutions down and send you some scans. I hope I can do that by Wednesday.

Cheers,

Johannes

Will post the scan when it's sent through.
 
  • #35
StevieTNZ said:
Will post the scan when it's sent through.
Will wait for it.
 
  • #36
Do we expect to see photons #1 and #4 both in VV with photons #2 and #3 in HH (and vice versa)? And not #1 and #4 in HH, and #2 and #3 also in HH?
 
  • #37
Received a PDF from Johannes Kofler re: the evolution of the separable state. But this is without the EOWs etc. in action (which should produce the same result).

However, if we have definite states prior to the first BS, even though afterwards they're in a superposition of travelling both paths, they reach the plates and are converted into L and R polarisation. When they reach the PBSs after the 2nd BS, wouldn't they each have a 1/2 probability of coming out V or H? *shrugs*
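
For what it's worth, writing the circular states in the H/V basis makes the 1/2 explicit (sign conventions for L and R vary, but the probabilities don't): |L> = (|H> + i|V>)/√2 and |R> = (|H> - i|V>)/√2, so |<H|L>|^2 = |<V|L>|^2 = 1/2. A photon with definite L or R polarisation should therefore come out of the PBS as H or V with probability 1/2 each.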
 


  • #38
Okay - so I don't understand a few things:

1) We start off with two pairs of entangled photons, #1 and #2 and #3 and #4, each in the state |H>|V> - |V>|H>.
2) How do we get from that to describing #2 and #3 (before the 1st beam splitter) as |H>|H> + |V>|V> and |H>|H> - |V>|V>, when no entanglement swapping has occurred yet?

If we sent #2 and #3 through the interferometer and they had definite H or V polarisation, would we even end up with the results obtained? Shouldn't we describe #2 and #3 before the 1st BS as still entangled with their original partners, and calculate the result from that? But even then we wouldn't end up with |H>|V>(b") or |H>|V>(c") or |H>(b")|H>(c") or |V>(b")|V>(c").
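
Regarding 2): as far as I can tell, the |H>|H> ± |V>|V> terms come from the standard entanglement-swapping identity (ordinary Bell-basis algebra; the sign conventions should be checked against Kofler's notes):

|Psi->12 |Psi->34 = 1/2 [ |Psi+>14 |Psi+>23 - |Psi->14 |Psi->23 - |Phi+>14 |Phi+>23 + |Phi->14 |Phi->23 ]

with |Psi±> = (|H>|V> ± |V>|H>)/√2 and |Phi±> = (|H>|H> ± |V>|V>)/√2. The |H>|H> ± |V>|V> combinations for photons 2 and 3 appear only as components of this expansion of the original two-singlet state; nothing needs to have been swapped already for the algebra to hold.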
 
  • #39
DrChinese said:
Thanks! Wasn't really sure what the order of magnitude was.

Post edit: this should have referenced San K's post #19. My apologies, Dr. Chinese.
Whereas the order of magnitude seems immense, all but those scant few down-converted photons pass 'straight' through the mechanism; the 'entangled' down-converted photons travel in a cone away from this straight line. Each pair of down-converted photons splits 180 degrees apart from each other. These are the entangled pairs of photons which are used in experiments. There is 'nearly' no messy noise that can enter an experiment in this process. IMO.
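
As a side note (standard textbook relations, added here for reference): the geometry follows from energy and momentum conservation in the crystal, ω_pump = ω_signal + ω_idler and k_pump = k_signal + k_idler, so the transverse momenta of the two down-converted photons cancel and they emerge at diametrically opposite points of the emission cone, i.e. 180 degrees apart as described above.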


mathal
 