Can grandpa understand Bell's Theorem?

Click For Summary
The discussion centers on the challenges of understanding Bell's Theorem, particularly from the perspective of someone with a limited mathematical background. The theorem illustrates the discrepancies between quantum mechanics and classical physics, especially regarding correlations observed in entangled particles. Key points include the unexpected correlation results predicted by quantum mechanics, which differ from classical expectations, and the implications of these results for our understanding of measurement and communication at a distance. The conversation also touches on the need for clearer explanations of these complex concepts in physical terms, rather than relying solely on mathematical formalism. Ultimately, the discussion highlights the ongoing struggle to reconcile intuitive understanding with the counterintuitive nature of quantum phenomena.
  • #241
miosim said:
Einstein didn’t know that his concept could be transformed into a circus.

According to the EPR argument, the two correlated particles are represented by two different and independent wave functions. When the first wave function collapses, it reveals one complementary parameter (+spin), which gives us knowledge of the other complementary parameter (-spin) of the second wave function. Because this wave function has no description of that parameter, the wave function, and accordingly QM, is incomplete.

Now let's see Bell's 'reasonable' reproduction of this EPR model:

“…Let us illustrate the possibility of what Einstein had in mind in the context of the particular quantum mechanical predictions already cited for the EPRB gedanken experiment. These predictions make it hard to believe in the completeness of quantum formalism…”
Then Bell ‘mumbles’ the following:
“…But of course outside that formalism they make no difficulty whatever for the notion of local causality. To show this explicitly we exhibit a trivial ad hoc space-time picture of what might go on. It is a modification of the naive classical picture already described. Certainly something must be modified in that, to reproduce the quantum phenomena. Previously, we implicitly assumed for the net force in the direction of the field gradient (which we always take to be in the same direction as the field) a form: F cos θ ….”

This is it. These are all his efforts to recreate the EPR model in the spirit of Einstein.
Another totally-confident-yet-totally-ignorant argument from miosim (there is a psychological explanation for this sort of thing). The second "mumbled" statement has nothing to do with how Bell ultimately defines "local causality", it's just meant as a "trivial" and "ad hoc" model that he starts out with as an example, then shows it doesn't work and abandons it. His actual proof of the theorem that local causality is incompatible with QM has nothing whatsoever to do with that model. But I [post=3257023]already told you this before[/post]:
JesseM said:
Yes, he starts by assuming a specific "naive classical model" with a modified force law given by equation (2), but if you read further in the paper he later makes the argument more general and considers what would have to be true in all possible models respecting the "local causality" (same as local realism) he mentions above. Note he immediately shows on p. C2-49 that this naive model fails to match up with QM "at intermediate angles", and then goes on to say:

"Of course this trivial model was just the first one we thought of, and it worked up to a point. Could we not be a little more clever, and devise a model which reproduces the quantum formulae completely? No. It cannot be done, so long as action at a distance is excluded."

So he's saying all locally causal models which exclude action-at-a-distance will fail to match up with QM, not just the "trivial model" he brought up briefly on p. C2-48. To explain why this is true, he first starts with the analogy of Bertlmann's socks, which is intended to illustrate how one can derive an inequality based on the idea that if pairs of entangled particles (or pairs of socks) always give identical results when subjected to the same test, that must be because each member of the pair had a set of properties (assigned to them by the source when they were created at a common location) that gave them the same set of predetermined results for each possible test. In a "locally causal" universe this is the only way to explain how you always see perfect correlations whenever experimenters choose the same test, as he explains on C2-52:

"Let us summarize once again the logic that leads to the impasse. The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel. If we do not accept the intervention on one side as a causal influence on the other, we seem obliged to admit that the results on both sides are determined in advance anyway, independently of the intervention on the other side, by signals from the source and by the local magnet set."
miosim said:
Bell (and his supporters) just forgets that the EPR particles are represented by two independent wave functions and therefore their cos^2 behavior is identical to Bell's QM model.
They're not represented by "two independent wave functions" in QM, they're represented by a single wavefunction representing the entangled two-particle system. Bell is proving that no local theory can reproduce the QM prediction which is based on this single (nonlocal) wavefunction.
miosim said:
Secondly, if Bell decided to model EPR particles as classical ones, he must at least include the interactions of these particles with the polarizers (the QM formalism has these interactions built in), as follows: the polarizers, like an optical 'funnel', modify the polarization of both photons in the direction of higher correlation, in this way eliminating the inequality with the QM prediction.
Bell's definition of local causality makes no specific assumptions about how the particles interact with the polarizers, but the definition is broad enough to include the possibility that the polarizers would modify polarization in a local way. Again, Bell's definition is exactly equivalent to my 1) and 2) (again see the links I gave at the end of [post=3278882]this post[/post]), and my two assumptions certainly don't rule out the possibility that the particles are modified by their interactions with the polarizers. If you want to engage Bell's argument, you need to try to think about these basic assumptions, not some strawman based on your lack of reading comprehension. You said you found my 1) and 2) too "technical", but I'd be happy to elaborate on any sentences or terms you found confusing if you want to make an effort to understand them, rather than just taking the intellectually lazy route of saying "too hard!" and going back to repeating the same old ignorant arguments and strawman, ignoring all refutations like a good http://redwing.hutman.net/~mreed/warriorshtm/ferouscranus.htm .
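To make the "local causality" assumption concrete, here is a minimal Python sketch (my own illustration, not anything from Bell's paper; the sign-based response function is an assumed toy rule). The source gives each pair a shared hidden polarization λ, and each outcome depends only on the local analyzer angle and λ. The model reproduces the perfect correlations whenever the angles are equal, but its correlation falls off linearly with angle rather than as cos 2Δ, so it disagrees with QM at intermediate angles, just as Bell's "trivial" model did:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_outcome(angle, lam):
    # Deterministic local rule: +1 when the hidden polarization lam lies
    # within 45 degrees (mod 180) of the analyzer axis, else -1.
    return np.sign(np.cos(2 * (lam - angle)))

def lhv_correlation(a, b, n=200_000):
    # Shared hidden variable set at the source, uniform over [0, pi).
    lam = rng.uniform(0, np.pi, n)
    return np.mean(lhv_outcome(a, lam) * lhv_outcome(b, lam))

for deg in (0, 22.5, 45, 67.5, 90):
    d = np.radians(deg)
    print(deg, round(lhv_correlation(0.0, d), 3), round(np.cos(2 * d), 3))
```

At 0, 45, and 90 degrees the two columns agree; at 22.5 degrees the local model gives about 0.5 while QM predicts about 0.707, and that gap is what Bell's theorem turns into a general impossibility for all such models.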
 
Last edited by a moderator:
  • #242
SpectraCat said:
If that is true, then how is it possible that two crossed polarizers can block 100% of the light?

You are correct. The example was built to show that the 85/15 split at those angles is possible in a classical model and is easier to calculate. Consider if the edges are rounded out a bit:

A particle is modeled with the properties of a Bloch sphere, as shown at http://en.wikipedia.org/wiki/Bloch_sphere . This is represented below by the density of the red area on Bob's and Alice's measuring devices, overlaid with green values. The density of red represents the probability of measuring the photon if it is presented at that angle and is a property of the photon (cos^2).

This picture shows Bob and Alice measuring a sequence of "up" particles. It shows that, for particles that reach the detector, Bob and Alice always measure the same thing no matter the orientation of their measuring devices, as long as they both measure at the same angle:
clockcone_p1.jpg


If Bob tilts his measuring device by 45 degrees, he notices that the number of matching particles drops to 50%. If Bob tilts his device by 90 degrees, he does not see any matching particles, of course, since he doesn't see any particles. Experimental results are shown in http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf , figure 3:
clockcone_p2.jpg


Finally, if Bob tilts his device 22.5 or 67.5 degrees, he gets the 85% and 15% predicted by QM. Note these are the kind of results you would expect from http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf , table 1:
clockcone_p3.jpg
 
  • #243
miosim said:
Bell (and his supporters) just forgot that the EPR particles are represented by two independent wave functions and therefore their cos^2 behavior is identical to Bell's QM model.

Secondly, if Bell decided to model EPR particles as classical ones, he must at least include the interactions of these particles with the polarizers (the QM formalism has these interactions built in), as follows: the polarizers, like an optical 'funnel', modify the polarization of both photons in the direction of higher correlation, in this way eliminating the inequality with the QM prediction.

It seems to me that Bell's theorem is dead.

You seem to be missing the point by increasing amounts on each attempt!

The cos^2 behaviour of two independent particles leads to only half of the correlation values predicted by QM and confirmed by experiment.

Bell's theorem is NOT based on ANY classical model; such models are only used as examples to illustrate the theory.

Bell's theorem simply points out that a triangle inequality applies to differences between sets of results in any local realistic theory, but QM violates that inequality.
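The "half the correlation" claim above is easy to check numerically. In this Monte Carlo sketch (my own, under the assumption that each unentangled photon of a pair shares a common polarization λ and passes its polarizer independently with Malus-law probability cos²), the correlation comes out as (1/2)cos 2Δ, half the entangled-pair value of cos 2Δ:

```python
import numpy as np

rng = np.random.default_rng(1)

def product_state_correlation(delta, n=200_000):
    lam = rng.uniform(0, np.pi, n)           # shared polarization of each pair
    pa = np.cos(lam) ** 2                    # Malus probability, analyzer at 0
    pb = np.cos(lam - delta) ** 2            # Malus probability, analyzer at delta
    a = np.where(rng.random(n) < pa, 1, -1)  # each side decides independently
    b = np.where(rng.random(n) < pb, 1, -1)
    return np.mean(a * b)

d = np.radians(22.5)
print(product_state_correlation(d))   # ~0.354 = 0.5*cos(45 deg)
print(np.cos(2 * d))                  # ~0.707, the QM entangled-state value
```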
 
  • #244
edguy99 said:
You are correct. The example was built to show that the 85/15 split at those angles is possible in a classical model and is easier to calculate. Consider if the edges are rounded out a bit:

A particle is modeled with the properties of a Bloch sphere, as shown at http://en.wikipedia.org/wiki/Bloch_sphere . This is represented below by the density of the red area on Bob's and Alice's measuring devices, overlaid with green values. The density of red represents the probability of measuring the photon if it is presented at that angle and is a property of the photon (cos^2).

This picture shows Bob and Alice measuring a sequence of "up" particles. It shows that, for particles that reach the detector, Bob and Alice always measure the same thing no matter the orientation of their measuring devices, as long as they both measure at the same angle. If Bob tilts his measuring device by 45 degrees, he notices that the number of matching particles drops to 50%. If Bob tilts his device by 90 degrees, he does not see any matching particles, of course, since he doesn't see any particles. Experimental results are shown in http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf , figure 3. Finally, if Bob tilts his device 22.5 or 67.5 degrees, he gets the 85% and 15% predicted by QM. Note these are the kind of results you would expect from http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf , table 1.

Ok .. I am a little baffled .. that seems like just the basic Malus' law description for correlations between polarization measurements of unentangled photon pairs. What does any of that have to do with correlations between measurements on polarization-entangled photon pairs, which is what was studied in the experiment you cited?
 
  • #245
edguy99 said:
You are correct. The example was built to show that the 85/15 split at those angles is possible in a classical model and is easier to calculate. Consider if the edges are rounded out a bit:

A particle is modeled with the properties of a Bloch sphere, as shown at http://en.wikipedia.org/wiki/Bloch_sphere . This is represented below by the density of the red area on Bob's and Alice's measuring devices, overlaid with green values. The density of red represents the probability of measuring the photon if it is presented at that angle and is a property of the photon (cos^2).

...

You know: suppose a cat is a flying dog.

FlyingDog.jpg


Virtually everything you have here is either wrong or makes no sense at all. Entangled particles of known spin (yes, these exist) do NOT behave statistically as you describe in your pictures. And the descriptions you provide don't demonstrate realism.

(Photons are spin 1, by the way.)
 
  • #246
Jonathan Scott said:
Bell's theorem simply points out that a triangle inequality applies to differences between sets of results in any local realistic theory, but QM violates that inequality.

Thanks, finally some sanity.
 
  • #247
Jonathan Scott said:
You seem to be missing the point by increasing amounts on each attempt!

The cos^2 behaviour of two independent particles leads to only half of the correlation values predicted by QM and confirmed by experiment.

Bell's theorem is NOT based on ANY classical model; such models are only used as examples to illustrate the theory.

Bell's theorem simply points out that a triangle inequality applies to differences between sets of results in any local realistic theory, but QM violates that inequality.

~I gather that Bell's theorem is "sufficient" to prove that Quantum Mechanics violates locality, or something like that... But is it really necessary? I'm arguing from some ignorance, because I can't recall Bell's theorem, but it seems like, when I did see its derivation some years ago, it was a matter of formal logic, having nothing to do with experiment whatsoever. At the time, I had no doubt that Bell's theorem was true. (That's the nature of a theorem.) If I recall correctly, it was a fairly simple derivation that could be explained in 15 minutes or so on a chalkboard. In the same lecture, though, the results of a quantum mechanics experiment were described--just the results, mind you, not the experiment itself. The most difficult part was to see how it was that they were able to abstract the results of the experiment down to something to which one could apply Bell's Theorem; or why one would bother.

~The attached graph (below; labels added) from http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf seems to get at the issue. The experiment is not quite as perfect as I would like, because it uses two polarizers instead of two birefringent crystals. However, it seems to me that if you used two birefringent crystals, instead of doing four runs you would just have to do two, and you would get the α = 0° and the α = 90° plots simultaneously. Then you would get the α = 45° and the α = 135° plots simultaneously.

The way the experiment is set up, by changing alpha you affect the chance of detection at the other polarizer. If the experiment were set up with crystals, you would NOT affect the chance of detection, but rather which output channel the photon lines up with.

By itself, this is weird enough that I'd say you have some kind of action at a distance. A sort of non-local wave collapse. You don't have to bring up anything called "Bell's Theorem" unless you want to show me a formal proof of something that you've already convinced me of. In fact, I'm not really entirely surprised that there is something strange going on, because interference effects, (two-slit experiment, diffraction, etc) already exhibit a possibly related wave-collapse phenomenon.

But now we should also bring up the exciting aspect of the experiment. When I receive a photon through one receiver or another, can I use this as some form of faster-than-light communication? Let's set it up with birefringent crystals at both ends instead of the polarizers, so we receive 100% of the entangled photons instead of at most 50%.

The first question is: can we guarantee that almost every photon coming in is from an entangled pair, and that every entangled pair is going through both receivers? IF SO, then I would say yes, you could look at the photon count and, based on whether your photon count were 300/0, 150/150, or 0/300, figure out what angle the other crystal was set at.

In practice, of course, arranging the power source and two receivers thousands, millions, or billions of miles apart for 100% mutual detection would be... difficult.
 

Attachments

  • two polarizer result.png
  • #248
JDoolin said:
But now we should also bring up the exciting aspect of the experiment. When I receive a photon through one receiver or another, can I use this as some form of faster-than-light communication? Let's set it up with birefringent crystals at both ends instead of the polarizers, so we receive 100% of the entangled photons instead of at most 50%.

The first question is: can we guarantee that almost every photon coming in is from an entangled pair, and that every entangled pair is going through both receivers? IF SO, then I would say yes, you could look at the photon count and, based on whether your photon count were 300/0, 150/150, or 0/300, figure out what angle the other crystal was set at.

In practice, of course, arranging the power source and two receivers thousands, millions, or billions of miles apart for 100% mutual detection would be... difficult.
No, you can't use it for FTL communication. The simplest explanation as to why is that, for any given measurement at one end of the channel (call it your end), you cannot know a priori whether or not there has already been a measurement at the other end of the channel that determined the result at your end. In other words, if you set your polarizer at 45 degrees and detect a photon, does that mean a measurement at the other end of the channel was done "first" at (for example) 135 degrees, determining the result at your end? Or does it mean that your measurement was done "first", determining the result of your partner's "future" measurement at the other end of the channel?

Note that "first" and "future" are in quotes because statements about the relative orders of events in reference frames with a space-like separation need to be carefully qualified, and we have not done that here.
 
Last edited:
  • #249
SpectraCat said:
Ok .. I am a little baffled .. that seems like just the basic Malus' law description for correlations between polarization measurements of unentangled photon pairs. What does any of that have to do with correlations between measurements on polarization-entangled photon pairs, which is what was studied in the experiment you cited?

This is "just the basic Malus' law description" with one important difference. When Bob is at 22 degrees, he only has an 85% chance of measuring a vertical photon, hence the drop in "coordinated hits" between Bob and Alice (he either sees it or not). The photons are not somehow reduced in intensity by aligning their electrical vector to the measuring field.

The hidden variable theory proposed in http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf just below figure 4 results in the straight line shown in figure 4. Assuming that Bob (beta in the experiment) has an 85% chance of measuring a photon when at 22 degrees preserves the curved line in figure 4 and the coordinated hits measured by Bob and Alice, i.e. if Bob does not measure the photon, you don't have a coordinated hit, and Alice's measurements of coordinated hits must have also dropped "instantly" even though she did not do anything.
 
  • #250
DrChinese said:
You know: suppose a cat is a flying dog.

FlyingDog.jpg


Virtually everything you have here is either wrong or makes no sense at all. Entangled particles of known spin (yes, these exist) do NOT behave statistically as you describe in your pictures. And the descriptions you provide don't demonstrate realism.

(Photons are spin 1, by the way.)

Hey, where did you get a picture of my dog? The photons in the experiment start out linearly polarized in a specific direction, so they are generally talked about as up or down in this type of experiment, hence the reference.
 
  • #251
JDoolin said:
...The experiment is not quite as perfect as I would like, because it uses two polarizers instead of two birefringent crystals.

This has been mentioned a couple of times now. Do you know of a good experiment using this setup (birefringent crystals) with a link that we could discuss and eliminate the polarizers?
 
  • #252
edguy99 said:
This is "just the basic Malus' law description" with one important difference. When Bob is at 22 degrees, he only has an 85% chance of measuring a vertical photon, hence the drop in "coordinated hits" between Bob and Alice (he either sees it or not). The photons are not somehow reduced in intensity by aligning their electrical vector to the measuring field.

That is the quantum phrasing of Malus' law .. there is no significant difference between the results or the interpretation.

The hidden variable theory proposed in http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf just below figure 4 results in the straight line shown in figure 4. Assuming that Bob (beta in the experiment) has an 85% chance of measuring a photon when at 22 degrees preserves the curved line in figure 4 and the coordinated hits measured by Bob and Alice, i.e. if Bob does not measure the photon, you don't have a coordinated hit, and Alice's measurements of coordinated hits must have also dropped "instantly" even though she did not do anything.

Detection probabilities are taken into account in the development of the equations for both the actual experiment and the CHSH inequality used in that paper. A correlation count can only be established by comparison of the two sets of results, so the concept of an "instant drop" in the correlation count is ill-defined.
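For reference, the CHSH quantity can be evaluated directly from the QM correlation E(a,b) = cos 2(a-b). (This is a sketch; the 0/45/22.5/67.5 degree settings are the usual textbook choice and are assumed here, not taken from the paper.)

```python
import numpy as np

def E(a, b):
    # QM correlation for polarization-entangled photons, angles in degrees.
    return np.cos(2 * np.radians(a - b))

a1, a2, b1, b2 = 0.0, 45.0, 22.5, 67.5   # assumed CHSH analyzer settings
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ~ 2.828; every local realistic model obeys S <= 2
```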
 
  • #253
JDoolin said:
1. I gather that Bell's theorem is "sufficient" to prove that Quantum Mechanics violates locality, or something like that... But is it really necessary? I'm arguing from some ignorance, because I can't recall Bell's theorem, but it seems like, when I did see its derivation some years ago, it was a matter of formal logic, having nothing to do with experiment whatsoever. At the time, I had no doubt that Bell's theorem was true. (That's the nature of a theorem.) If I recall correctly, it was a fairly simple derivation that could be explained in 15 minutes or so on a chalkboard. In the same lecture, though, the results of a quantum mechanics experiment were described--just the results, mind you, not the experiment itself. The most difficult part was to see how it was that they were able to abstract the results of the experiment down to something to which one could apply Bell's Theorem; or why one would bother.

By itself, this is weird enough that I'd say you have some kind of action at a distance. A sort of non-local wave collapse. You don't have to bring up anything called "Bell's Theorem" unless you want to show me a formal proof of something that you've already convinced me of. In fact, I'm not really entirely surprised that there is something strange going on, because interference effects, (two-slit experiment, diffraction, etc) already exhibit a possibly related wave-collapse phenomenon.

2. But now we should also bring up the exciting aspect of the experiment. When I receive a photon through one receiver or another, can I use this as some form of faster-than-light communication? ...

1. To understand why Bell is needed, let's return to the original EPR situation in which we imagine there is a more complete specification of the system possible. For example, perhaps there are hundreds of hidden elements which lead us to see the so-called perfect correlations envisioned by EPR - and you would need a lot to get these correlations. Now, you may consider this implausible, but it does show why we need Bell.

2. What is being graphed in your attached example is P(a+b), which is the coincidence rate. Nothing changes visibly on either side when looking at that side alone. So no signaling is possible.

Other than that, I pretty well agree with you.
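The "nothing changes visibly on either side" point can be illustrated with a toy sampler of the QM joint statistics (my own sketch, assuming the pair matches with probability cos²Δ and that Alice's analyzer is fixed at 0 degrees). Bob's setting changes the coincidence pattern, but Alice's local statistics stay at 50/50 throughout, which is why no signal gets through:

```python
import numpy as np

rng = np.random.default_rng(2)

def run(bob_angle_deg, n=100_000):
    delta = np.radians(bob_angle_deg)
    alice = rng.integers(0, 2, n) * 2 - 1      # Alice's outcome: +1/-1, 50/50
    same = rng.random(n) < np.cos(delta) ** 2  # QM: P(same outcome) = cos^2(delta)
    bob = np.where(same, alice, -alice)
    return alice.mean(), np.mean(alice == bob)

for ang in (0, 22.5, 45, 90):
    marginal, match = run(ang)
    # Alice's marginal stays ~0 no matter what Bob does; only the match rate moves.
    print(ang, round(marginal, 3), round(match, 3))
```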
 
  • #254
edguy99 said:
Hey, where did you get a picture of my dog? The photons in the experiment start out linearly polarized in a specific direction, so they are generally talked about as up or down in this type of experiment, hence the reference.

That also looks like my dog when I open the door and look the other way for a second. :smile:

You cannot start out with knowledge of the polarization (say as up) and expect correlations which follow the cos^2 rule.

For example: Alice set at 22.5 degrees, Bob same, the match % with polarizers will be: 73%/2 (PBS would be twice that) with Type I non-polarization entangled pairs.

On the other hand, with Type I polarization entangled pairs, the match % would be 85%/2.

My point is that your basic premise itself (how polarization is observed) can be experimentally tested directly, and found to be incorrect. This is separate from attempting to model as a Bell inequality. It fails before you get that far.

In addition, the entire point of Bell is to demonstrate that realistic solutions are not possible. You have yet to demonstrate realism. That requires providing an answer to the value of counterfactual measurements. I.e. the values for 3 settings I pick across a group of photons. If you want me to explain the rules for that, I would be happy to. Then you will see the problem more clearly with your idea.
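The counterfactual point about three settings can be made concrete with a small enumeration (my own sketch; the 0/120/240 degree settings are an illustrative choice). Any predetermined answer set for three settings forces an average pairwise match rate of at least 1/3, while QM predicts a match rate of cos²(120°) = 1/4 for polarization-entangled photons at those settings:

```python
import itertools
import numpy as np

# All 8 possible predetermined (+1/-1) answer sets for three analyzer settings.
triples = list(itertools.product([1, -1], repeat=3))
pairs = [(0, 1), (0, 2), (1, 2)]

def match_rate(t):
    # Fraction of the three setting pairs that agree for one answer set.
    return sum(t[i] == t[j] for i, j in pairs) / 3

realist_min = min(match_rate(t) for t in triples)
qm_match = np.cos(np.radians(120.0)) ** 2   # photon match rate at 120 deg separation
print(realist_min, qm_match)  # 1/3 vs 1/4: realism demands more matching than QM allows
```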
 
  • #255
edguy99 said:
This has been mentioned a couple of times now. Do you know of a good experiment using this setup (birefringent crystals) with a link that we could discuss and eliminate the polarizers?

http://arxiv.org/abs/quant-ph/9810080

Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger

This is one of the primary references in scholarly articles. This is the top echelon of researchers.
 
  • #256
SpectraCat said:
No, you can't use it for FTL communication. The simplest explanation as to why is that, for any given measurement at one end of the channel (call it your end), you cannot know a priori whether or not there has already been a measurement at the other end of the channel that determined the result at your end. In other words, if you set your polarizer at 45 degrees and detect a photon, does that mean a measurement at the other end of the channel was done "first" at (for example) 135 degrees, determining the result at your end? Or does it mean that your measurement was done "first", determining the result of your partner's "future" measurement at the other end of the channel?

Note that "first" and "future" are in quotes because statements about the relative orders of events in reference frames with a space-like separation need to be carefully qualified, and we have not done that here.

I wouldn't think it would matter whether it is measured first at the beta end or first at the alpha end. Cosine is an even function, so whether you get cos(a-b) or cos(b-a), you would get the same result.

Besides which, as you mention, the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous.
 
  • #257
DrChinese said:
1. To understand why Bell is needed, let's return to the original EPR situation in which we imagine there is a more complete specification of the system possible. For example, perhaps there are hundreds of hidden elements which lead us to see the so-called perfect correlations envisioned by EPR - and you would need a lot to get these correlations. Now, you may consider this implausible, but it does show why we need Bell.

2. What is being graphed in your attached example is P(a+b), which is the coincidence rate. Nothing changes visibly on either side when looking at that side alone. So no signaling is possible.

Other than that, I pretty well agree with you.

The way I understood the graph is that it shows N(A and B), which is the number of events where both A and B were detected at the same time.

Now you seem to be saying that the overwhelming majority of photons detected at A and B are non-coincident, so that the slight change caused by this effect would be minuscule? I could see how it might be minuscule, or perhaps statistically too small to measure, but I don't understand how it could be zero.
 
  • #258
JDoolin said:
I wouldn't think it would matter whether it is measured first at the beta end or first at the alpha end. Cosine is an even function, so whether you get cos(a-b) or cos(b-a), you would get the same result.

What you seem to be missing is that in order to transmit information over such a channel, the person receiving the transmission must know both a and b. However, in order to transmit information, the person sending the information must be free to change one of those parameters. Furthermore, cos(a-b) defines the coincidence rate (or coincidence probability) ... in order to transmit information, you would have to know about specific coincident events at both ends of the channel. That obviously requires a comparison step, and thus a lightspeed (or slower) channel.

Just think about being at one end of such a channel with a space-like separation to the other end. What can you do? You can choose the angle of your polarizer (let's say b), and record photon detection events. Let's assume that the photons arrive at a known, constant rate of 1 per second. What do you see? Each second you check your detector to see if a photon was registered; detection events count as 1's, non-detection events as 0's. How can you extract information from that channel?

[EDIT: That last question was poorly phrased ... I meant to ask, how can you know that the information you are receiving is due to manipulations performed at the other end of the channel, rather than just random noise?]

Besides which, as you mention, the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous.

Yup.
 
  • #259
JDoolin said:
The way I understood the graph is that it shows N(A and B), which is the number of events where both A and B were detected at the same time.
You have it right. So how would you know N(A and B) unless both sides were in communication? And what method do you plan to use to get that information?

And just to be clear: the intensity on either detector never changes.
 
  • #260
edguy99 said:
Assuming that Bob (beta in the experiment) has an 85% chance of measuring a photon when at 22 degrees preserves the curved line in figure 4 and the coordinated hits measured by Bob and Alice, i.e. if Bob does not measure the photon, you don't have a coordinated hit, and Alice's measurements of coordinated hits must have also dropped "instantly" even though she did not do anything.

Are you back to invisible photons? Those won't enter into any experimental statistics anywhere. Or?
 
  • #261
DrChinese said:
You have it right. So how would you know N(A and B) unless both sides were in communication? And what method do you plan to use to get that information?

And just to be clear: the intensity on either detector never changes.

This might be easier to resolve if I had information on exactly what the intensity (in photons per second) is actually received by either detector. I believe I have only been given the number of coincident events at around 300 every 10 seconds, but not the total intensity.

If we could ensure ZERO non-coincident photons, then you'd have a number between 0 and 100% of max. If it is 90% non-coincident photons, then you'd have a signal between 90 and 100% of max. If it is made up of 99.99% non-coincident photons, then you'd get a signal between 99.99% and 100% of the maximum, and you might as well say "the intensity on either detector never changes," because the change would be statistically insignificant.
 
  • #262
JDoolin said:
This might be easier to resolve if I had information on exactly what the intensity (in photons per second) is actually received by either detector. I believe I have only been given the number of coincident events at around 300 every 10 seconds, but not the total intensity.

If we could ensure ZERO non-coincident photons, then you'd have a number between 0 and 100% of max. If it is 90% non-coincident photons, then you'd have a signal between 90 and 100% of max. If it is made up of 99.99% non-coincident photons, then you'd get a signal between 99.99% and 100% of the maximum, and you might as well say "the intensity on either detector never changes," because the change would be statistically insignificant.

I will say it again: the intensity at either detector NEVER changes (beyond normal deviations). About 50% of the incident photons come through Alice's polarizer. This is true regardless of what Bob does. Or whether Bob does anything at all.

You can see the separate intensity for Alice and Bob in the experiment as N(A) and N(B). That looks to be about 85,000 per run IIRC.
 
  • #263
DrChinese said:
I will say it again: the intensity at either detector NEVER changes (beyond normal deviations). About 50% of the incident photons come through Alice's polarizer. This is true regardless of what Bob does. Or whether Bob does anything at all.

You can see the separate intensity for Alice and Bob in the experiment as N(A) and N(B). That looks to be about 85,000 per run IIRC.

Is that 85,000 photon events in each 10 second run? (Edit: Out of which 300 are "coincident" events?)
 
  • #264
I think so, although that seems high to me. I assume it's because this is an undergrad setup and the controls don't need to be too tight. If you look at the Weihs et al paper, they get a much higher rate of matches.
 
  • #265
DrChinese said:
I think so, although that seems high to me. I assume it's because this is an undergrad setup and the controls don't need to be too tight. If you look at the Weihs et al paper, they get a much higher rate of matches.

I guess the more relevant question is: what's the standard deviation? Is it
85,000 +/- 1,000, or
85,000 +/- 100, or
85,000 +/- 1 event
per 10 seconds?

If the "noise" is constant enough, then you should be able to detect a change of 300 events per 10 seconds. But if the count regularly varies from 83,000 to 87,000, then a change of 300 might not be noticed.

If you're saying the intensity is constant, but the 300 counts per 10 seconds would be statistically insignificant in the measurement of the intensity anyway, I can make sense of that. It's just that the change can't be detected over the noise.

But if the intensity were EXACTLY the same, while the coincidence events (detection of entangled particles) went DOWN, that would mean there had to be some INCREASE in the number of non-coincidence (detection of non-entangled particles) events. Where would these extra non-coincidence events come from?
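A rough sense of scale for the standard-deviation question, assuming the counts are shot-noise (Poisson) limited — an assumption, since the thread doesn't state the actual noise model — so sigma is about the square root of the count:

```python
import math

# Assuming pure shot noise (Poisson statistics), sigma ~ sqrt(N).
N = 85_000                 # singles per 10 s run, the figure quoted in the thread
sigma = math.sqrt(N)       # ~291.5 counts
delta = 300                # hypothetical shift we'd like to detect

print(f"per-run noise: {sigma:.0f} counts ({delta / sigma:.2f} sigma for a {delta}-count shift)")

# Averaging k runs shrinks the noise by sqrt(k); to reach 5 sigma you need about:
runs = math.ceil((5 * sigma / delta) ** 2)
print(f"runs needed for a 5-sigma detection: {runs}")
```

So a 300-count shift is roughly a 1-sigma effect per run: not invisible in principle, but only resolvable by averaging many runs.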
 
  • #266
JDoolin said:
I guess the more relevant question is: what's the standard deviation? Is it
85,000 +/- 1,000, or
85,000 +/- 100, or
85,000 +/- 1 event
per 10 seconds?

If the "noise" is constant enough, then you should be able to detect a change of 300 events per 10 seconds. But if the count regularly varies from 83,000 to 87,000, then a change of 300 might not be noticed.

If you're saying the intensity is constant, but the 300 counts per 10 seconds would be statistically insignificant in the measurement of the intensity anyway, I can make sense of that. It's just that the change can't be detected over the noise.

But if the intensity were EXACTLY the same, while the coincidence events (detection of entangled particles) went DOWN, that would mean there had to be some INCREASE in the number of non-coincidence (detection of non-entangled particles) events. Where would these extra non-coincidence events come from?

It would be so much easier to forget the polarizer example and switch to the PBS example, because clearly that is causing a degree of confusion. I hope you see that if there were a PBS, every photon would emerge as a + or a -. That intensity does NOT change for Alice regardless of anything Bob does. Just as importantly, the + intensity and the - intensity will be nearly equal, and that ratio will not change either.

Do you see why? In other words, you are trying to imagine an effect which does not exist. Many folks get confused about the absorption of photons by a polarizer and get lost in analyzing that. The effect to look for is the coincidence rate varying according to the cos^2 rule predicted by QM versus one of the other functions you get with a local realistic model. There is nothing that ever changes at Alice as a result of what happens at Bob EXCEPT what appears when you count matches versus non-matches (or similar).

The reason that no one cares about photons which cannot be paired is that they don't fit the criteria for an entangled pair. We are interested only in creating pairs that fit these criteria and analyzing those. So if there were nothing but paired events - no noise - there would still be no change in intensity at Alice based on anything Bob does. And vice versa.
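One concrete example of the "other functions you get with a local realistic model": the classic deterministic model where each pair shares a hidden polarization and each side's +/- outcome is fixed by which side of its PBS axis that polarization falls on. This gives a straight-line match rate instead of cos^2. A sketch (illustrative; the angles and the model are not taken from the experiment under discussion):

```python
import math

def qm_match(theta):
    """QM probability that Alice and Bob get the same +/- result,
    for relative polarizer angle theta."""
    return math.cos(theta) ** 2

def local_match(theta, n=100_000):
    """Match rate for a deterministic local model: each pair shares a hidden
    polarization lam; a side outputs '+' iff lam is within 45 degrees of its
    axis (i.e. the sign of cos 2(lam - setting)). Averaged over a lam grid."""
    hits = 0
    for k in range(n):
        lam = (k + 0.5) * math.pi / n
        a_out = math.cos(2 * lam) >= 0                # Alice at angle 0
        b_out = math.cos(2 * (lam - theta)) >= 0      # Bob at angle theta
        hits += a_out == b_out
    return hits / n

for deg in (0, 22.5, 45, 67.5, 90):
    th = math.radians(deg)
    print(f"{deg:5}: QM {qm_match(th):.3f}  local {local_match(th):.3f}")
```

At 0, 45 and 90 degrees the two agree (this local model does reproduce the perfect correlations), but at 22.5 degrees QM predicts about 0.854 against the local model's 0.75 — exactly the intermediate angles where Bell-test experiments discriminate between them.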
 
  • #267
DrChinese said:
It would be so much easier to forget the polarizer example and switch to the PBS example, because clearly that is causing a degree of confusion. I hope you see that if there were a PBS, every photon would emerge as a + or a -. That intensity does NOT change for Alice regardless of anything Bob does. Just as importantly, the + intensity and the - intensity will be nearly equal, and that ratio will not change either.

Do you see why? In other words, you are trying to imagine an effect which does not exist. Many folks get confused about the absorption of photons by a polarizer and get lost in analyzing that. The effect to look for is the coincidence rate varying according to the cos^2 rule predicted by QM versus one of the other functions you get with a local realistic model. There is nothing that ever changes at Alice as a result of what happens at Bob EXCEPT what appears when you count matches versus non-matches (or similar).

The reason that no one cares about photons which cannot be paired is that they don't fit the criteria for an entangled pair. We are interested only in creating pairs that fit these criteria and analyzing those. So if there were nothing but paired events - no noise - there would still be no change in intensity at Alice based on anything Bob does. And vice versa.

No, quite likely I'll have to start over from scratch to understand. The only assumption I'm aware of making is that the total number of photons detected must equal the number of non-entangled photons plus the number of entangled photons. But there may be any number of other things I'm overlooking.

By my reasoning, if you used a birefringent crystal which passed 100% of the incoming photons into one of the two polarization channels, you would STILL have a small difference between the values.

For instance, using the graph from the paper I used before and the number 85,000 you gave me earlier: if you aligned alpha with beta, you would get
84,825 + 300 = 85,125 hits on the aligned axis and 84,825 + 50 = 84,875 hits on the non-aligned axis. A 0.3% difference.

But if you had alpha and beta at a 45 degree angle from each other, you would get 84,825 + 175 = 85,000 hits through both channels; a 0% difference.


(By the way, I'm not sure what PBS stands for.)
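(PBS here presumably stands for polarizing beam splitter.) The back-of-envelope numbers above are internally consistent if the base singles count is taken as 84,825 per channel; the coincidence figures (300, 50, 175) are values read off the graph in the thread, not measured data. A quick check:

```python
base = 84_825                      # assumed base singles per channel per run
aligned = base + 300               # coincidences land in the aligned channel
cross = base + 50
print(aligned, cross)              # 85125 84875

mid = (aligned + cross) / 2
print(f"relative difference: {(aligned - cross) / mid:.2%}")   # 0.29%

both_45 = base + 175               # at 45 degrees the coincidences split evenly
print(both_45)                     # 85000
```
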
 
  • #268
DrChinese said:
I think so, although that seems high to me. I assume it's because this is an undergrad setup and the controls don't need to be too tight. If you look at the Weihs et al paper, they get a much higher rate of matches.

I see the coincidences in the 0 to 800 range for 5 second intervals, but I don't see any measure of N(A) or N(B) for a 5 second interval.
 
  • #269
JDoolin said:
... the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous.

Great quote.
 
  • #270
JesseM said:
Another totally-confident-yet-totally-ignorant argument from miosim …
… The second "mumbled" statement has nothing to do with how Bell ultimately defines "local causality", it's just meant as a "trivial" and "ad hoc" model that he starts out with as an example, then shows it doesn't work and abandons it.
Did Bell abandon his distorted model/example of EPR? No, he didn't, according to the reference below:

http://www.scholarpedia.org/article/Bell's_theorem#S11a
“…The proof of Bell's theorem is obtained by combining the EPR argument (from locality and certain quantum predictions to pre-existing values) and Bell's inequality theorem…”

Apparently Bell didn't abandon his model of the EPR argument, and maintained that this is the best model he (or anybody else) could build:
"Of course this trivial model was just the first one we thought of, and it worked up to a point. Could we not be a little more clever, and devise a model which reproduces the quantum formulae completely? No. It cannot be done, so long as action at a distance is excluded."

At the same time I am a bit confused about how Bell uses EPRB arguments. “…The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel…”.

I don’t know if "immediately foretells" is a part of the EPRB argument. I am not sure if this is another distortion of Einstein’s views or a distortion caused by Bohm. My interpretation of Einstein’s views is based on his EPR paper (1935). Therefore if I refer to EPR, I mean this specific paper.
miosim said:
Bell (and his supporters) just forget that the EPR particles are represented by two independent wave functions, and therefore their cos^2 behavior is identical to Bell's QM model.
JesseM said:
…They're not represented by "two independent wave functions" in QM, they're represented by a single wavefunction representing the entangled two-particle system. Bell is proving that no local theory can reproduce the QM prediction which is based on this single (nonlocal) wavefunction.
I am not talking about the QM interpretation, but about the EPR interpretation (in the EPR paper) of particles that are represented by two independent wave functions. http://prola.aps.org/pdf/PR/v47/i10/p777_1. Did you read the original paper?
JesseM said:
… Again, Bell's definition is exactly equivalent to my 1) and 2) … If you want to engage Bell's argument, you need to try to think about these basic assumptions, not some strawman based on your lack of reading comprehension.
No I don’t want to engage Bell's arguments that are based on the profoundly distorted initial conditions.
JesseM said:
And just for your information, Bell wasn't in the least bit sympathetic to Copenhagen, he much preferred nonlocal hidden-variable theories which try to give an objective picture of what's really going on with quantum systems
By trying to provide an objective picture of what's really going on with quantum systems, Bell violated the “religious foundation” of QM built by Bohr and Heisenberg.
Bohr: "There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we can say about Nature."

By violating this "foundation" Bell opened the "can of worms." He expanded the scale of the wave-function collapse and revealed its nonsense. This nonsense is called 'non-local interactions'.

Jonathan Scott said:
You seem to be missing the point by increasing amounts on each attempt!
The cos^2 behaviour of two independent particles leads to only half of the correlation values predicted by QM and confirmed by experiment.
Aspect may disagree with you. From Bell's Theorem: The Naive View of an Experimentalist:
"… a straightforward application of Malus law shows that a subsequent measurement performed along b on photon ν2 will lead to P(a, b) = cos^2(a,b) …"

The author of the paper below may also disagree with you:

http://bib.irb.hr/datoteka/287013.pavicic-prd90.pdf
"…The recognition of Scully and Milonni's theory as a theory which makes the quantum Malus law work for composite systems was a clue for the proof..."
“… In other words, although being structurally different, HV on and QM predict the same experimental outcome…”
Jonathan Scott said:
Bell's theorem is NOT based on ANY classical model; such models are only used as examples to illustrate the theory.
First, it is obviously a misleading illustration of the EPR argument.
Second, as I understand it, Bell's inequalities were derived based on this specific illustration. I don't see any other model he used to compare with QM. Do you mean he compared QM with 'locality' in general? In that case, please refer me to a formula for this "label of locality" and show how it appears in Bell's inequalities.
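The "half of the correlation" dispute above can be checked numerically. In a local model where each photon pair shares one hidden polarization λ and each side independently registers +/- with Malus-law probability cos^2(λ - setting), the correlation works out to (1/2)cos 2(a-b): the cos^2 shape survives, but with half the amplitude of the QM prediction cos 2(a-b). A minimal sketch (the model is illustrative, not taken from any of the cited papers):

```python
import math

def local_malus_corr(a, b, n=1000):
    """E(a,b) for a local model: the pair shares a hidden polarization lam
    (uniform over [0, pi)); each side independently gives +1 with Malus-law
    probability cos^2(lam - setting), else -1. Averaged over a lam grid."""
    total = 0.0
    for k in range(n):
        lam = (k + 0.5) * math.pi / n
        ea = 2 * math.cos(lam - a) ** 2 - 1   # mean +/-1 outcome at Alice, = cos 2(lam-a)
        eb = 2 * math.cos(lam - b) ** 2 - 1   # same at Bob
        total += ea * eb                      # outcomes independent given lam
    return total / n

def qm_corr(a, b):
    """QM correlation for polarization-entangled photons."""
    return math.cos(2 * (a - b))

for deg in (0, 22.5, 45, 90):
    th = math.radians(deg)
    print(f"{deg:5}: local {local_malus_corr(0.0, th):+.4f}  QM {qm_corr(0.0, th):+.4f}")
```

Both sides of the argument are partly right: the stochastic Malus-law model does produce a cos-shaped correlation, but at exactly half the QM amplitude at every angle, so it cannot reproduce the perfect correlations at parallel analyzers.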
 
