Can grandpa understand Bell's Theorem?

  • #251
JDoolin said:
...The experiment is not quite as perfect as I would like, because it uses two polarizers instead of two birefringent crystals.

This has been mentioned a couple of times now. Do you know of a good experiment using this setup (birefringent crystals) with a link that we could discuss and eliminate the polarizers?
 
  • #252
edguy99 said:
This is "just the basic Malus' law description" with one important difference. When Bob is at 22 degrees, he only has an 85% chance of measuring a vertical photon hence the drop in "coordinated hits" between Bob and Alice (he simply sees it or not). The photons are not somehow reduced in intensity by aligning their electrical vector to the measuring field.

That is the quantum phrasing of Malus' law... there is no significant difference in either the results or the interpretation.

The hidden variable theory proposed in http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf just below figure 4 produces the straight line shown in figure 4. Assuming that Bob (beta in the experiment) has an 85% chance of measuring a photon when at 22 degrees preserves the curved line in figure 4 and the coordinated hits measured by Bob and Alice; i.e., if Bob does not measure the photon, you don't have a coordinated hit, and Alice's count of coordinated hits must also have dropped "instantly" even though she did not do anything.

Detection probabilities are taken into account in the development of the equations for both the actual experiment, and the CHSH inequality used in that paper. A correlation count can only be established by comparison of the two sets of results, so the concept of an "instant drop" in the correlation count is ill defined.
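
For readers trying to picture the figure-4 discussion, here is a short Python sketch (my own illustration, not the model from the cited paper) comparing the QM coincidence prediction for an ideal Type I entangled pair, (1/2)cos^2(a-b), with the piecewise-linear curve produced by one standard deterministic local hidden-variable model in which both photons carry a shared polarization angle and each passes its polarizer whenever the setting is within 45 degrees of that angle:

import numpy as np

def qm_coincidence(delta_deg):
    # QM coincidence prediction for an ideal Type I entangled pair
    # behind two single-channel polarizers: (1/2) * cos^2(a - b)
    return 0.5 * np.cos(np.radians(delta_deg)) ** 2

def lhv_coincidence(delta_deg, n_pairs=200_000, seed=0):
    # One standard deterministic local model: both photons carry the same
    # hidden polarization angle lam; each passes its own polarizer iff the
    # setting is within 45 degrees of lam (mod 180).
    lam = np.random.default_rng(seed).uniform(0.0, 180.0, n_pairs)
    def passes(setting_deg):
        diff = np.abs((lam - setting_deg + 90.0) % 180.0 - 90.0)
        return diff < 45.0
    return np.mean(passes(0.0) & passes(delta_deg))

for delta in (0, 22.5, 45, 67.5, 90):
    print(f"{delta:5.1f} deg   QM {qm_coincidence(delta):.3f}"
          f"   local model {lhv_coincidence(delta):.3f}")

The two predictions agree at 0, 45 and 90 degrees but differ in between, which is where Bell-type tests have their discriminating power.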
 
  • #253
JDoolin said:
1. I gather that Bell's theorem is "sufficient" to prove that Quantum Mechanics violates locality, or something like that... But is it really necessary? I'm arguing from some ignorance, because I can't recall Bell's theorem, but it seems like, when I did see its derivation some years ago, it was a matter of formal logic, having nothing to do with experiment whatsoever. At the time, I had no doubt that Bell's theorem was true. (That's the nature of a theorem.) If I recall correctly it was a fairly simple derivation that could be explained in 15 minutes or so on a chalkboard. In the same lecture, though, the results of a quantum mechanics experiment were described--just the results, mind you, not the experiment itself. The most difficult part was to see how it was that they were able to abstract the results of the experiment down to something to which one could apply Bell's Theorem; or why one would bother.

By itself, this is weird enough that I'd say you have some kind of action at a distance. A sort of non-local wave collapse. You don't have to bring up anything called "Bell's Theorem" unless you want to show me a formal proof of something that you've already convinced me of. In fact, I'm not really entirely surprised that there is something strange going on, because interference effects, (two-slit experiment, diffraction, etc) already exhibit a possibly related wave-collapse phenomenon.

2. But now we should also bring up the exciting aspect of the experiment. When I receive a photon through one receiver or another, Can I use this as some form of faster-than-light communication? ...

1. To understand why Bell is needed, let's return to the original EPR situation in which we imagine there is a more complete specification of the system possible. For example, perhaps there are hundreds of hidden elements which lead us to see the so-called perfect correlations envisioned by EPR - and you would need a lot to get these correlations. Now, you may consider this implausible, but it does show why we need Bell.

2. What is being graphed in your attached example is P(a+b), which is the coincidence rate. Nothing changes visibly on either side when looking at that side alone. So no signaling is possible.

Other than that, I pretty well agree with you.
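
As an aside, here is a minimal sketch (my own toy illustration, with arbitrary example angles) of the kind of "hidden elements" picture described above: each pair carries a pre-agreed answer for every possible setting, which trivially reproduces the perfect correlations EPR pointed to whenever both sides happen to use the same setting. Bell's contribution is showing that no assignment of this kind can also reproduce the QM statistics at mismatched settings.

import random

random.seed(1)
settings = [0, 22.5, 45, 67.5]          # example analyzer angles (degrees)

def make_pair():
    # Each pair carries a shared "instruction list": a predetermined +1/-1
    # outcome for every possible setting (the hidden elements).
    instructions = {s: random.choice([+1, -1]) for s in settings}
    return instructions, dict(instructions)   # both photons carry a copy

trials, same_setting_matches = 10_000, 0
for _ in range(trials):
    alice_photon, bob_photon = make_pair()
    s = random.choice(settings)               # both sides use the same setting
    same_setting_matches += (alice_photon[s] == bob_photon[s])
print("match rate at identical settings:", same_setting_matches / trials)  # always 1.0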
 
  • #254
edguy99 said:
Hey, where did you get a picture of my dog? The photons in the experiment start out linear polarized in a specific direction so are generally talked about as up or down in this type of experiment, hence the reference.

That also looks like my dog when I open the door and look the other way for a second. :smile:

You cannot start out with knowledge of the polarization (say as up) and expect correlations which follow the cos^2 rule.

For example: with Alice set at 22.5 degrees and Bob the same, the match % with polarizers will be 73%/2 (a PBS would be twice that) with Type I non-polarization entangled pairs.

On the other hand, with Type I polarization entangled pairs, the match % would be 85%/2.

My point is that your basic premise itself (how polarization is observed) can be experimentally tested directly, and found to be incorrect. This is separate from attempting to model as a Bell inequality. It fails before you get that far.

In addition, the entire point of Bell is to demonstrate that realistic solutions are not possible. You have yet to demonstrate realism. That requires providing an answer to the value of counterfactual measurements. I.e. the values for 3 settings I pick across a group of photons. If you want me to explain the rules for that, I would be happy to. Then you will see the problem more clearly with your idea.
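
For reference, one common way to set up the three-setting exercise mentioned above is sketched below (with my own assumed angles of 0, 120 and 240 degrees, not necessarily the ones DrChinese would pick): if every photon in a group carries predetermined +1/-1 answers for all three settings, the match rate averaged over the three pairs of different settings can never fall below 1/3, while the QM prediction for entangled photons at these angles is cos^2(120 deg) = 1/4.

from itertools import combinations
import random

random.seed(2)
angles = ("0 deg", "120 deg", "240 deg")

def average_match_rate(photons):
    # Average match rate over the three pairs of *different* settings,
    # assuming each photon carries a definite +1/-1 answer for every angle.
    rates = [sum(p[a] == p[b] for p in photons) / len(photons)
             for a, b in combinations(angles, 2)]
    return sum(rates) / len(rates)

# With only two possible values, at least two of the three answers on each
# photon must agree, so this average can never drop below 1/3:
photons = [{a: random.choice([+1, -1]) for a in angles} for _ in range(100_000)]
print("realistic average match rate:", average_match_rate(photons))
print("QM prediction, cos^2(120 deg):", 0.25)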
 
  • #255
edguy99 said:
This has been mentioned a couple of times now. Do you know of a good experiment using this setup (birefringent crystals) with a link that we could discuss and eliminate the polarizers?

http://arxiv.org/abs/quant-ph/9810080

Violation of Bell's inequality under strict Einstein locality conditions, Gregor Weihs, Thomas Jennewein, Christoph Simon, Harald Weinfurter, Anton Zeilinger

This is one of the primary references in scholarly articles. This is the top echelon of researchers.
 
  • #256
SpectraCat said:
No, you can't use it for FTL communication. The simplest explanation as to why is that, for any given measurement at one end of the channel (call it your end), you cannot know a priori whether or not there has already been a measurement at the other end of the channel that determined the result at your end. In other words, if you set your polarizer at 45 degrees and detect a photon, does that mean a measurement at the other end of the channel was done "first" at (for example) 135 degrees, determining the result at your end? Or does it mean that your measurement was done "first", determining the result of your partner's "future" measurement at the other end of the channel?

Note that "first" and "future" are in quotes because statements about the relative orders of events in reference frames with a space-like separation need to be carefully qualified, and we have not done that here.

I wouldn't think it would matter whether it is measured first at the beta end or first at the alpha end. Cosine is an even function, so if you get Cos(a-b) or Cos(b-a) you would get the same result.

Besides which, as you mention, the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous.
 
  • #257
DrChinese said:
1. To understand why Bell is needed, let's return to the original EPR situation in which we imagine there is a more complete specification of the system possible. For example, perhaps there are hundreds of hidden elements which lead us to see the so-called perfect correlations envisioned by EPR - and you would need a lot to get these correlations. Now, you may consider this implausible, but it does show why we need Bell.

2. What is being graphed in your attached example is P(a+b), which is the coincidence rate. Nothing changes visibly on either side when looking at that side alone. So no signaling is possible.

Other than that, I pretty well agree with you.

The way I understood the graph, it shows N(A and B), which is the number of events where both A and B were detected at the same time.

Now you seem to be saying that the overwhelming majority of photons detected at A and B are non-coincident, so that the slight change caused by this effect would be minuscule? I could see how it might be minuscule or perhaps statistically too small to measure, but I don't understand how it could be zero.
 
  • #258
JDoolin said:
I wouldn't think it would matter whether it is measured first at the beta end or first at the alpha end. Cosine is an even function, so if you get Cos(a-b) or Cos(b-a) you would get the same result.

What you seem to be missing is that in order to transmit information over such a channel, the person receiving the transmission must know both a and b. However, in order to transmit information, the person sending the information must be free to change one of those parameters. Furthermore, cos^2(a-b) defines the coincidence rate (or coincidence probability) ... in order to transmit information, you would have to know about specific coincident events at both ends of the channel. That obviously requires a comparison step, and thus a lightspeed (or slower) channel.

Just think about being at one end of such a channel with a space-like separation to the other end. What can you do? You can choose the angle of your polarizer (let's say b), and record photon detection events. Let's assume that the photons arrive at a known, constant rate of 1 per second. What do you see? Each second you check your detector to see if a photon was registered, detection events count as 1's, non-detection events as 0's. How can you extract information from that channel?

[EDIT: That last question was poorly phrased ... I meant to ask, how can you know that the information you are receiving is due to manipulations performed at the other end of the channel, rather than just random noise?]

Besides which, as you mention, the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous.

Yup.
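
To make the point concrete, here is a small sketch (a toy simulation of an ideal Type I polarization-entangled source, not tied to any particular paper) of what the receiving end actually sees: Bob's stream of detections has the same 50/50 statistics no matter what angle Alice chooses, so nothing Alice does shows up in Bob's record by itself; only the later comparison of the two records reveals the cos^2 correlation.

import numpy as np

rng = np.random.default_rng(3)

def run(alice_deg, bob_deg, n=200_000):
    # Ideal Type I entangled pairs with two-channel analyzers: Alice's outcome
    # is +/-1 with probability 1/2, and Bob's outcome matches Alice's with
    # probability cos^2(a - b), the standard QM joint distribution.
    delta = np.radians(alice_deg - bob_deg)
    alice = rng.choice([+1, -1], size=n)
    match = rng.random(n) < np.cos(delta) ** 2
    bob = np.where(match, alice, -alice)
    return alice, bob

for alice_deg in (0, 30, 60, 90):
    alice, bob = run(alice_deg, bob_deg=0)
    print(f"Alice at {alice_deg:2d} deg: Bob's +1 fraction {np.mean(bob == +1):.3f},"
          f" match rate {np.mean(alice == bob):.3f}")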
 
  • #259
JDoolin said:
The way I understood the graph, it shows N(A and B), which is the number of events where both A and B were detected at the same time.
You have it right. So how would you know N(A and B) unless both sides were in communication? And what method do you plan to use to get that information?

And just to be clear: the intensity on either detector never changes.
 
  • #260
edguy99 said:
Assuming that Bob (beta in the experiment) has an 85% chance of measuring a photon when at 22 degrees preserves the curved line in figure 4 and the coordinated hits measured by Bob and Alice; i.e., if Bob does not measure the photon, you don't have a coordinated hit, and Alice's count of coordinated hits must also have dropped "instantly" even though she did not do anything.

Are you back to invisible photons? Those won't enter into any experimental statistics anywhere. Or?
 
  • #261
DrChinese said:
You have it right. So how would you know N(A and B) unless both sides were in communication? And what method do you plan to use to get that information?

And just to be clear: the intensity on either detector never changes.

This might be easier to resolve if I had information on exactly what intensity (in photons per second) is actually received by either detector. I believe I have only been given the number of coincident events, around 300 every 10 seconds, but not the total intensity.

If we could ensure ZERO non-coincident photons, then you'd have a number between 0 and 100% of max. If it is 90% non-coincident photons, then you'd have a signal between 90 and 100% of max. If it is made up of 99.99% non-coincident photons, then you'd get a signal between 99.99% and 100% of the maximum, and you might as well say "the intensity on either detector never changes," because the change would be statistically insignificant.
 
  • #262
JDoolin said:
This might be easier to resolve if I had information on exactly what intensity (in photons per second) is actually received by either detector. I believe I have only been given the number of coincident events, around 300 every 10 seconds, but not the total intensity.

If we could ensure ZERO non-coincident photons, then you'd have a number between 0 and 100% of max. If it is 90% non-coincident photons, then you'd have a signal between 90 and 100% of max. If it is made up of 99.99% non-coincident photons, then you'd get a signal between 99.99% and 100% of the maximum, and you might as well say "the intensity on either detector never changes," because the change would be statistically insignificant.

I will say it again: the intensity at either detector NEVER changes (beyond normal deviations). About 50% of the incident photons come through Alice's polarizer. This is true regardless of what Bob does. Or whether Bob does anything at all.

You can see the separate intensity for Alice and Bob in the experiment as N(A) and N(B). That looks to be about 85,000 per run IIRC.
 
  • #263
DrChinese said:
I will say it again: the intensity at either detector NEVER changes (beyond normal deviations). About 50% of the incident photons come through Alice's polarizer. This is true regardless of what Bob does. Or whether Bob does anything at all.

You can see the separate intensity for Alice and Bob in the experiment as N(A) and N(B). That looks to be about 85,000 per run IIRC.

Is that 85,000 photon events in each 10 second run? (Edit: Out of which 300 are "coincident" events?)
 
  • #264
I think so, although that seems high to me. I assume because this is an undergrad setup and the controls don't need to be too tight. If you look at the Weihs et al paper, they get a much higher rate of matches.
 
  • #265
DrChinese said:
I think so, although that seems high to me. I assume because this is an undergrad setup and the controls don't need to be too tight. If you look at the Weihs et al paper, they get a much higher rate of matches.

I guess the more relevant question is what's the standard deviation. Is it
85,000 +/- 1,000, or
85,000 +/- 100, or
85,000 +/- 1 event
per 10 seconds?

If the "noise" is constant enough, then you should be able to detect a change of 300 events per 10 seconds. But if the noise regularly varies from 83,000 to 87,000, then a change of 300 might not be noticed.

If you're saying the intensity is constant, but the 300 counts per 10 seconds would be statistically insignificant in the measurement of the intensity anyway, I can make sense of that. It's just that the change can't be detected over the noise.

But if the intensity were EXACTLY the same, while the coincidence events (detection of entangled particles) went DOWN, that would mean there had to be some INCREASE in the number of non-coincidence (detection of non-entangled particles) events. Where would these extra non-coincidence events come from?
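
For a rough sense of scale, a back-of-the-envelope estimate (a sketch assuming simple Poisson counting statistics and the roughly 85,000 singles and 300 coincidences per 10 seconds quoted above): the shot noise on 85,000 counts is about sqrt(85,000), roughly 290, so a shift of about 300 singles in a single 10-second window would sit right at the one-sigma noise level even before any source drift is considered.

import math

singles_per_window = 85_000      # N(A) or N(B), counts per 10 s (figure quoted above)
coincidences_per_window = 300    # coincident pairs per 10 s

# Assuming Poisson statistics, the standard deviation of a count of N is sqrt(N):
sigma = math.sqrt(singles_per_window)
print(f"shot noise on {singles_per_window} singles: +/- {sigma:.0f}")
print(f"a change of {coincidences_per_window} counts is "
      f"{coincidences_per_window / sigma:.1f} sigma in one 10 s window")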
 
  • #266
JDoolin said:
I guess the more relevant question is what's the standard deviation. Is it
85,000 +/- 1,000, or
85,000 +/- 100, or
85,000 +/- 1 event
per 10 seconds?

If the "noise" is constant enough, then you should be able to detect a change of 300 events per 10 seconds. But if the noise regularly varies from 83,000 to 87,000, then a change of 300 might not be noticed.

If you're saying the intensity is constant, but the 300 counts per 10 seconds would be statistically insignificant in the measurement of the intensity anyway, I can make sense of that. It's just that the change can't be detected over the noise.

But if the intensity were EXACTLY the same, while the coincidence events (detection of entangled particles) went DOWN, that would mean there had to be some INCREASE in the number of non-coincidence (detection of non-entangled particles) events. Where would these extra non-coincidence events come from?

It would be so much easier to forget the polarizer example and switch to the PBS example because clearly that is causing a degree of confusion. I hope you see that if there was a PBS, every photon would emerge as a + or a -. That intensity does NOT change for Alice regardless of anything Bob does. Just as importantly, the + intensity and the - intensity will be nearly equal, and that ratio will not change either.

Do you see why? In other words, you are trying to imagine an effect which does not exist. Many folks get confused about the absorption of photons by a polarizer and get lost in analyzing that. The effect to look for is the coincidence rate varying according to the cos^2 rule predicted by QM versus one of the other functions you get with a local realistic model. There is nothing that ever changes at Alice as a result of what happens at Bob EXCEPT as it appears when you count matches versus non-matches (or similar).

The reason that no one cares about photons which cannot be paired is that they don't fit the criteria of an entangled pair. We are interested only in creating pairs that fit these criteria and analyzing those. So if there were nothing but paired events - no noise - there would still be no change in intensity at Alice based on anything Bob does. And vice versa.
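
A compact way to see the PBS point is to tabulate the four coincidence permutations directly from the QM joint probabilities (a sketch for an ideal Type I entangled source; a real run adds detector efficiency and accidentals): Alice's +/- split stays at 50/50 whatever Bob's angle is, and only the division between matches and mismatches moves, following cos^2(a-b).

import numpy as np

def coincidence_table(a_deg, b_deg):
    # QM joint probabilities for an ideal Type I entangled pair behind two PBSs:
    # P(++) = P(--) = cos^2(a-b)/2,  P(+-) = P(-+) = sin^2(a-b)/2
    d = np.radians(a_deg - b_deg)
    c2, s2 = np.cos(d) ** 2, np.sin(d) ** 2
    return {"++": c2 / 2, "--": c2 / 2, "+-": s2 / 2, "-+": s2 / 2}

for b_deg in (0, 22.5, 45, 90):
    t = coincidence_table(0, b_deg)
    alice_plus = t["++"] + t["+-"]        # Alice's + marginal: always 0.5
    matches = t["++"] + t["--"]           # match rate: cos^2(a - b)
    print(f"Bob at {b_deg:4.1f} deg: Alice '+' fraction {alice_plus:.2f},"
          f" match rate {matches:.3f}")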
 
  • #267
DrChinese said:
It would be so much easier to forget the polarizer example and switch to the PBS example because clearly that is causing a degree of confusion. I hope you see that if there was a PBS, every photon would emerge as a + or a -. That intensity does NOT change for Alice regardless of anything Bob does. Just as importantly, the + intensity and the - intensity will be nearly equal, and that ratio will not change either.

Do you see why? In other words, you are trying to imagine an effect which does not exist. Many folks get confused about the absorption of photons by a polarizer and get lost in analyzing that. The effect to look for is the coincidence rate varying according to the cos^2 rule predicted by QM versus one of the other functions you get with a local realistic model. There is nothing that ever changes at Alice as a result of what happens at Bob EXCEPT as it appears when you count matches versus non-matches (or similar).

The reason that no one cares about photons which cannot be paired is that they don't fit the criteria of an entangled pair. We are interested only in creating pairs that fit these criteria and analyzing those. So if there were nothing but paired events - no noise - there would still be no change in intensity at Alice based on anything Bob does. And vice versa.

No, quite likely, I'll have to start over from scratch to understand. The only assumption I'm aware of making is that the total number of photons detected must equal the number of nonentangled photons + the number of entangled photons. But there may be any number of other things I'm overlooking.

By my reasoning, if you used a birefringent crystal which passed 100% of the incoming photons to one of the two polarizations, you would STILL have a small difference between the values.

For instance using the graph from the paper I used before, and the number 85,000 you gave me earlier, if you aligned alpha with beta, you would get
84,825+300=85,125 hits on the aligned axis and 84,825+50=84,875 hits on the non-aligned axis. A 0.3% difference.

But if you had alpha and beta at a 45 degree angle from each other, you would get 84,825+175=85,000 hits through both channels; a 0% difference.


(By the way, I'm not sure what PBS stands for.)
 
  • #268
DrChinese said:
I think so, although that seems high to me. I assume because this is an undergrad setup and the controls don't need to be too tight. If you look at the Weihs et al paper, they get a much higher rate of matches.

I see the coincidences in the 0 to 800 range for 5-second intervals, but I don't see any measure of N(A) or N(B) for a 5-second interval.
 
  • #269
JDoolin said:
... the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous.

Great quote.
 
  • #270
JesseM said:
Another totally-confident-yet-totally-ignorant argument from miosim …
… The second "mumbled" statement has nothing to do with how Bell ultimately defines "local causality", it's just meant as a "trivial" and "ad hoc" model that he starts out with as an example, then shows it doesn't work and abandons it.
Did Bell abandon his distorted model/example of EPR? No, he didn't, according to the reference below:

http://www.scholarpedia.org/article/Bell's_theorem#S11a
“…The proof of Bell's theorem is obtained by combining the EPR argument (from locality and certain quantum predictions to pre-existing values) and Bell's inequality theorem…”

Apparently Bell didn’t abandon his model of the EPR argument and admitted that this is the best model he (or anybody else) can build:
"Of course this trivial model was just the first one we thought of, and it worked up to a point. Could we not be a little more clever, and device a model which reproduces the quantum formulae completely? No. It cannot be done, so long as action at a distance is excluded."

At the same time I am a bit confused about how Bell uses EPRB arguments. “…The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel…”.

I don’t know if "immediately foretells" is a part of the EPRB argument. I am not sure if this is another distortion of Einstein’s views or a distortion caused by Bohm. My interpretation of Einstein’s views is based on his EPR paper (1935). Therefore if I refer to EPR, I mean this specific paper.
miosim said:
Bell (and his supporters) just forget that the EPR particles are represented by two independent wave functions and therefore their cos^2 behavior is identical to Bell's QM model.
JesseM said:
…They're not represented by "two independent wave functions" in QM, they're represented by a single wavefunction representing the entangled two-particle system. Bell is proving that no local theory can reproduce the QM prediction which is based on this single (nonlocal) wavefunction.
I am not talking about the QM interpretation, but about the EPR interpretation (in the EPR paper) of particles that are represented by the two independent wave functions. http://prola.aps.org/pdf/PR/v47/i10/p777_1. Did you read the original paper?
JesseM said:
… Again, Bell's definition is exactly equivalent to my 1) and 2) … If you want to engage Bell's argument, you need to try to think about these basic assumptions, not some strawman based on your lack of reading comprehension.
No I don’t want to engage Bell's arguments that are based on the profoundly distorted initial conditions.
JesseM said:
And just for your information, Bell wasn't in the least bit sympathetic to Copenhagen, he much preferred nonlocal hidden-variable theories which try to give an objective picture of what's really going on with quantum systems
By trying to provide an objective picture of what's really going on with quantum systems, Bell violated the “religious foundation” of QM built by Bohr and Heisenberg.
Bohr: "There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we can say about Nature".

By violating this "foundation" Bell opened the "can of worms." He expanded the scale of the wave function collapse and revealed its non-sense. This non-sense is called 'non-local interactions'.

Jonathan Scott said:
You seem to be missing the point by increasing amounts on each attempt!
The cos^2 behaviour of two independent particles leads to only half of the correlation values predicted by QM and confirmed by experiment.
Aspect may disagree with you. From "Bell's Theorem: The Naive View of an Experimentalist":
"... a straightforward application of Malus law shows that a subsequent measurement performed along b on photon ν2 will lead to P(a,b) = cos^2(a,b) ..."

The author of the paper below may also disagree with you:

http://bib.irb.hr/datoteka/287013.pavicic-prd90.pdf
"...The recognition of Scully and Milonni's theory as a theory which makes the quantum Malus law work for composite systems was the clue for the proof..."
“… In other words, although being structurally different, HV on and QM predict the same experimental outcome…”
Jonathan Scott said:
Bell's theorem is NOT based on ANY classical model; such models are only used as examples to illustrate the theory.
First, it is obviously a misleading illustration of the EPR argument.
Second, as I understand it, Bell's inequalities were derived based on this specific illustration. I don't see any other model he used to compare with QM. Do you mean he compared QM with 'locality' in general? In this case, please refer me to a formula for this "label of locality" and how it appears in Bell's inequalities.
 
  • #271
JDoolin said:
No, quite likely, I'll have to start over from scratch to understand. The only assumption I'm aware of making is that the total number of photons detected must equal the number of nonentangled photons + the number of entangled photons. But there may be any number of other things I'm overlooking.

By my reasoning, if you used a birefringent crystal which passed 100% of the incoming photons to one of the two polarizations, you would STILL have a small difference between the values.

For instance using the graph from the paper I used before, and the number 85,000 you gave me earlier, if you aligned alpha with beta, you would get
84125+300=85,125 hits on the aligned axis and 84125+50=84875 hits on the non-aligned axis. A 0.3% difference.

But if you had alpha and beta at a 45 degree angle from each other, you would get 84125+175=85,000 hits through both channels; a 0% difference.


(By the way, I'm not sure what PBS stands for.)

To come to an apples to apples basis, you would get something like this instead of what you calculated:

If you had alpha and beta at a 0 degree angle from each other, you would get 85,000 hits through both channels; a 0% difference. There would be 350 matches.

If you had alpha and beta at a 45 degree angle from each other, you would get 85,000 hits through both channels; a 0% difference. There would be 175 matches.

The only thing that varies is the number of matches. And the formula which describes it needs the relative difference - a quantum nonlocal value - as its prime variable.
 
  • #272
edguy99 said:
I see the coincidences in the 0 to 800 range for 5 second intervals, but I don't see any measure of the N(A) or N(B) for a 5 second interval?

They didn't supply this value as I read it either. Since they define entangled as timetags within 6 ns, everything else is ignored. As the time window is increased, you get a lower value of S because a few unentangled* photon pairs are being considered.

*This may seem surprising, but pairs can be partially entangled. Anywhere between 0 and 100% fidelity, actually.
 
  • #273
JDoolin said:
(By the way, I'm not sure what PBS stands for.)

Oops, my bad. PBS = Polarizing Beam Splitter. A PBS, like a polarizer filter, can be oriented at any angle across 360 degrees. However, it separates the incident (incoming) beam into 2 output channels, which are called H and V. Or + and -. The designation is arbitrary, as it is the angle of orientation that controls.

http://en.wikipedia.org/wiki/Beam_splitter

These remove any doubt that there is some selection process going on within the PBS itself as you get 4 permutations of coincidences, 2 of which are matches and 2 of which are mismatches.
 
  • #274
miosim said:
Did Bell abandon his distorted model/example of EPR? No, he didn't, according to the reference below:

http://www.scholarpedia.org/article/Bell's_theorem#S11a
“…The proof of Bell's theorem is obtained by combining the EPR argument (from locality and certain quantum predictions to pre-existing values) and Bell's inequality theorem…”

Apparently Bell didn’t abandon his model of the EPR argument
miosim, your reading comprehension sucks, you read stuff you obviously don't understand in the slightest and then extract a few keywords and come up with a fantasy interpretation of what you think it means that is designed to make Bell look bad. Yes, he used the EPR argument, but where the hell do you get the idea that this had anything whatsoever to do with the "trivial ad hoc space-time picture of what might go on" which he briefly introduced in the Bertlmann's socks paper (and none of his other papers) and quickly tossed aside for a more broad definition of local causality? Of course nothing in the scholarpedia article suggested anything of the sort, nor did EPR propose any sort of "trivial ad hoc space-time picture" in their paper, this is a pure fantasy that popped into your head and you immediately seized on it because it fits your ignorant preconceptions.

The "EPR argument" in this case just refers to the idea that if the values of some quantities measured at different locations are found to be perfectly correlated, then these values must have already been predetermined by local variables prior to measurement (an idea which Einstein agreed with, his two-box analogy illustrated exactly this sort of idea). As I pointed out in an [post=3275052]earlier post[/post] this follows directly from my 1) and 2) which are equivalent to the most general argument Bell makes in his "La nouvelle cuisine" paper:
In this statement, I was attempting to be as general as Bell in my definition of local realism--some of the inequalities he derived did not depend on the assumption of a perfect correlation between separated measurements, and thus in some of his papers he defined "local causality" in as broad a way as possible so that knowledge of past conditions would not predetermine the measurement results with perfect certainty. I agree, as would Bell, that if you are looking at one of the inequalities that does assume a perfect correlation between measurements with the same detector setting, in that case it must be true that the measurement outcome was predetermined prior to measurements, that there is no probabilistic element at all. This conclusion can in fact be derived from the more general assumptions about local realism, which is why it doesn't need to be a starting assumption if you want to make your proof as general as possible.
I also quoted from p. 11 of this paper which says pretty much the same thing:
There is, in particular, a tendency for a relatively superficial focus on the relatively formal aspects of Bell’s arguments, to lead commentators astray. For example, how many commentators have too-quickly breezed through the prosaic first section of Bell’s 1964 paper (p. 14-21) – where his reliance on the EPR argument “from locality to deterministic hidden variables” is made clear – and simply jumped ahead to section 2’s Equation 1 (p. 15), hence erroneously inferring (and subsequently reporting to other physicists and ultimately teaching to students) that the derivation “begins with deterministic hidden variables”? (1981, p. 157)
Again, the idea is that you start with basic local realist assumptions like my 1) and 2) (again equivalent to his assumptions in the "La nouvelle cuisine" paper, I can point out how if you are interested in actually making an effort to think about these assumptions), then it's not hard to show that in the case of perfectly correlated outcomes this implies the results of these measurements were predetermined by local variables associated with each particle before you made your measurements. That's the very general notion that the "EPR argument" refers to, not some much more specific classical model that Bell just offered in one paper as an example before disposing of it.
miosim said:
and admitted that this is the best model he (or anybody else) can build:
"Of course this trivial model was just the first one we thought of, and it worked up to a point. Could we not be a little more clever, and device a model which reproduces the quantum formulae completely? No. It cannot be done, so long as action at a distance is excluded."
Uh, that sentence doesn't say "it's the best model", it just says that no other local model will be able to "reproduce the quantum formulae completely" either--a statement that he then goes on to prove in subsequent sections of the same paper, using arguments that have nothing to do with that original model.

Once again, you seem to jump to ridiculous interpretations of sentences that will serve your desperate need to "prove" Bell wrong, never even considering that perhaps the first interpretation that came into your head might not be the right one and that there might be other ways of reading it that don't make Bell into the cartoon idiot you want him to be. It's a basic principle of reading comprehension that you have to consider the possibility that the same sentence may be interpreted in different ways, and if your first interpretation makes the author out to be saying something completely foolish, instead of seizing on that interpretation so you can discount him, you need to take the time to consider whether there may be more "charitable" alternate interpretations (and if you can't think of any, ask defenders of the argument about it instead of jumping to conclusions). In philosophy and rhetoric, this idea goes by the name of the principle of charity. If you continue to ignore this principle, your reading comprehension will continue to suck.
miosim said:
At the same time I am a bit confused about how Bell uses EPRB arguments. “…The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel…”.

I don’t know if "immediately foretells" is a part of the EPRB argument. I am not sure if this is another distortion of Einstein’s views or a distortion caused by Bohm.
Of course it's not a distortion, Einstein's own two-box analogy, which he used to clarify what his intended meaning had been, was clearly describing exactly this sort of situation where knowing the result of one measurement (seeing whether your box has a ball in it) tells you the result of the other (if yours had a ball the other is empty and vice versa). The EPR paper also made very clear they were talking about situations where knowledge of one measurement tells you what the result of the same measurement on the other particle would be, on p. 1 they say: "A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity." The entirety of the subsequent argument in the EPR paper is based on this idea.
miosim said:
No I don’t want to engage Bell's arguments that are based on the profoundly distorted initial conditions.
So you don't want to try to understand what Bell was actually saying and "engage" with that, you just want to go on making up fantasy strawmen that you can easily knock down? By the way "engage with" isn't code for "agree with", it just means understanding Bell's actual position and arguing based on that.
miosim said:
By trying to provide an objective picture of what's really going on with quantum systems, Bell violated the “religious foundation” of QM built by Bohr and Heisenberg.
Bohr: "There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we can say about Nature".
Uh, QM is not a religion and Bohr is not the pope. In case you missed it, Einstein's whole purpose was to try to find an "objective picture" of this type, he was mocking the idea that the "abstract quantum mechanical description" could be complete with his two-boxes analogy:
"In front of me stand two boxes, with lids that can be opened, and into which I can look when they are open. This looking is called 'making an observation.' In addition there is a ball, which can be found in one or the other of the two boxes where an observation is made. Now I describe a state of affairs as follows: The probability is one-half that the ball is in the first box." (This is all the Schrödinger equation will tell you.) "Is this a complete description?" asks Einstein, and then gives two different answers.

"NO: A complete description is: the ball is (or is not) in the first box...

"YES: Before I open the box the ball is not in one of the two boxes. Being in a definite box only comes about when I lift the covers...

"Naturally, the second 'spiritualist' or Schrödingerian interpretation is absurd," Einstein continued tactfully, "and the man on the street would only take the first, Bornian, interpretation seriously."
But even aside from Einstein's views, saying that Bell was somehow wrong to try to find an objective picture in no way shows that Bell's theorem itself is incorrect. After all, Bell's theorem just proves that any attempt to find an objective picture obeying the principle of locality will inevitably fail to match the QM predictions. If you agree with Bohr then you may consider this result uninteresting since you never believed it would be possible to find an objective local picture in the first place, but you would have no reason to say it's actually incorrect at a technical level.
miosim said:
By violating this "foundation" Bell opened the "can of worms." He expanded the scale of the wave function collapse and revealed its non-sense. This non-sense is called 'non-local interactions'.
Nope, Bell's theorem makes no positive claim of the objective existence of "non-local interactions", it just proves the negative claim that any attempt to create an objective picture which features no non-local interactions will inevitably fail. This does not mean you have to adopt an objective picture that features non-local interactions, you can also just abandon the notion of any objective picture at all, as Bohr would have recommended.
Jonathan Scott said:
You seem to be missing the point by increasing amounts on each attempt!
The cos^2 behaviour of two independent particles leads to only half of the correlation values predicted by QM and confirmed by experiment.
miosim said:
Aspect may disagree with you. From "Bell's Theorem: The Naive View of an Experimentalist":
"... a straightforward application of Malus law shows that a subsequent measurement performed along b on photon ν2 will lead to P(a,b) = cos^2(a,b) ..."
In this statement (from near the bottom of p. 5 of the paper) Aspect is making the point that if you treat the measurement of photon v1 with a polarizer at angle a as causing a "collapse" of photon v2 into the same polarization state as v1, then since Malus' law says that measuring v1 first at angle a and then at angle b would give a probability of (1/2)*cos^2(a,b) of v1 passing both, it must also be true that if v2 is measured with a polarizer at angle b after its state has already been "collapsed" by the measurement of v1 at angle a (with a 1/2 probability v1 made it through), then v2 has a probability of (1/2)*cos^2(a,b) of making it through (or a probability of cos^2(a,b) that v2 makes it through if we already know v1 made it through). This conclusion isn't based on the classical Malus' law alone; it's based on the combination of Malus' law applied to two successive measurements of v1, plus the assumption that a single measurement of v1 causes an instantaneous "collapse" of v2 into the same state. There would be no way to reproduce this correlation (for arbitrary choices of angles a and b) in a local classical universe where Malus' law applied but there was no instantaneous "collapse" of the 2-particle wavefunction when one particle was measured.
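
To put numbers on both this point and Jonathan Scott's earlier remark that independent cos^2 behaviour gives only half the correlation, here is a short sketch (my own calculation, assuming pairs that share a common but random polarization angle lambda and obey Malus' law independently on each side): averaging cos^2(a-lambda)*cos^2(b-lambda) over lambda gives 1/8 + (1/4)cos^2(a-b), whereas the collapse picture gives (1/2)cos^2(a-b); the two agree at a 45-degree difference but the local version has only half the swing between 0 and 90 degrees.

import numpy as np

def collapse_prediction(delta_deg):
    # P(both pass) if measuring photon 1 at angle a collapses photon 2 into the
    # same polarization, after which Malus' law applies at angle b.
    return 0.5 * np.cos(np.radians(delta_deg)) ** 2

def independent_malus_prediction(delta_deg, n=200_000):
    # P(both pass) if each pair merely shares a random polarization angle lam
    # and each photon independently obeys Malus' law at its own polarizer.
    lam = np.random.default_rng(4).uniform(0.0, np.pi, n)
    return np.mean(np.cos(0.0 - lam) ** 2 * np.cos(np.radians(delta_deg) - lam) ** 2)

for delta in (0, 22.5, 45, 67.5, 90):
    print(f"{delta:5.1f} deg   collapse {collapse_prediction(delta):.3f}"
          f"   independent Malus {independent_malus_prediction(delta):.3f}")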
 
  • #275
DrChinese said:
To come to an apples to apples basis, you would get something like this instead of what you calculated:

If you had alpha and beta at a 0 degree angle from each other, you would get 85,000 hits through both channels; a 0% difference. There would be 350 matches.

If you had alpha and beta at a 45 degree angle from each other, you would get 85,000 hits through both channels; a 0% difference. There would be 175 matches.

The only thing that varies is the number of matches. And the formula which describes it needs the relative difference - a quantum nonlocal value - as its prime variable.

Hmmmm. Well, that suggests to me a different interpretation. Well, it forces me to make a realization that probably seems really, really obvious to you: we know where all of the non-coincident photons are coming from!

The non-coincident photons are coming from the down-conversion crystals, just like the coincident photons are. (my earlier silliness came from thinking that the non-coincident photons were from some unknown independent source, and should stay constant.)

What probably could have been explained to me is to realize that within the down-conversion crystals, the violet light excites the atoms, and then two red photons come out. With some of the atoms, the two red photons come out simultaneously; and can be registered. But with most, you get the two red photons coming out at different times, so you can't match them to their "twin."

I still may have to think through it from scratch, but I think it's a little clearer.
 
  • #276
JDoolin said:
Hmmmm. Well, that suggests to me a different interpretation. Well, it forces me to make a realization that probably seems really, really obvious to you: we know where all of the non-coincident photons are coming from!

The non-coincident photons are coming from the down-conversion crystals, just like the coincident photons are. (my earlier silliness came from thinking that the non-coincident photons were from some unknown independent source, and should stay constant.)

What probably could have been explained to me is to realize that within the down-conversion crystals, the violet light excites the atoms, and then two red photons come out. With some of the atoms, the two red photons come out simultaneously; and can be registered. But with most, you get the two red photons coming out at different times, so you can't match them to their "twin."

I still may have to think through it from scratch, but I think it's a little clearer.

Sums it up. There are any number of things that can affect entanglement. But the (effectively, per time window) simultaneous arrival means you have a pretty good pair.

I am not always aware of how familiar folks are with the various component processes. So I probably could have explained a few things better.
 
  • #277
JesseM said:
miosim, your reading comprehension sucks, you read stuff you obviously don't understand in the slightest and then extract a few keywords and come up with a fantasy interpretation of what you think it means that is designed to make Bell look bad. Yes, he used the EPR argument, but where the hell do you get the idea that this had anything whatsoever to do with the "trivial ad hoc space-time picture of what might go on" which he briefly introduced in the Bertlmann's socks paper (and none of his other papers) and quickly tossed aside for a more broad definition of local causality?
My fault.
JesseM said:
Once again, you seem to jump to ridiculous interpretations of sentences that will serve your desperate need to "prove" Bell wrong, never even considering that perhaps the first interpretation that came into your head might not be the right one and that there might be other ways of reading it that don't make Bell into the cartoon idiot you want him to be. …
At the beginning, I did consider that I just misunderstood Bell, because everybody seems to agree with him. However, I have an impression that Bell was under the influence of conviction/belief, and this is the best environment for the most foolish ideas to flourish. Science in this regard is not much different from religion.
Regarding the majority of physicists who accept Bell's proof, I think that they are more mathematicians than physicists, meaning that for them it is more natural to think about physical processes in terms of formulas than to visualize those processes. My impression is that since the emergence of QM, theoretical physics has been "hijacked" by mathematicians who transformed physics into applied mathematics.
At the same time I have my doubts also.
miosim said:
I don’t know if "immediately foretells" is a part of the EPRB argument. I am not sure if this is another distortion of Einstein’s views or a distortion caused by Bohm.
JesseM said:
Of course it's not a distortion, Einstein's own two-box analogy, which he used to clarify what his intended meaning had been, was clearly describing exactly this sort of situation where knowing the result of one measurement (seeing whether your box has a ball in it) tells you the result of the other (if yours had a ball the other is empty and vice versa). The EPR paper also made very clear they were talking about situations where knowledge of one measurement tells you what the result of the same measurement on the other particle would be …
Apparently I got suspicious over the word "immediately", which is irrelevant to the EPR argument but is appropriate within Bell's views.
 
  • #278
miosim said:
"... a straightforward application of Malus law shows that a subsequent measurement performed along b on photon ν2 will lead to P(a,b) = cos^2(a,b) ..."
JesseM said:
In this statement (from near the bottom of p. 5 of the paper) Aspect is making the point that if you treat the measurement of photon v1 with a polarizer at angle a as causing a "collapse" of photon v2 into the same polarization state as v1, …
…There would be no way to reproduce this correlation (for arbitrary choice of angles a and b) in a local classical universe where Malus' law applied but there was no instantaneous "collapse" of the 2-particle wave function when one particle was measured.
However, within the EPR paper (1935) there are no "classical" particles. Instead there are two independent QM entities (described by individual wave functions), and the quantum Malus' law should be fully applicable to them. Why did Bell 'strip' EPR particles of their QM privileges?

Let's look closely at the Bell/Aspect/your assumption that "... a single measurement of v1 causes an instantaneous 'collapse' of v2 into the same state." That means that the polarizer doesn't affect the polarization of entangled photons but acts as an ON/OFF gate only.

Let’s test this assumption using Aspect’s experimental setup.
1). First let's fully align polarizers A and B and observe a maximum (say 100%) correlation.
2). Set polarizers A and B at 90 deg to each other and observe zero correlation.
3). Let's add one more polarizer (C) between polarizer B and the photon source. Let's set this polarizer at an intermediate angle of 45 deg and monitor correlated photons.

1. According to the EPR model we should observe about 25% correlated photons, because the EPR photons on the B side will be rotated by the polarizers (B and C) and the intensity/probability of these photons can be calculated according to Malus' law.

2. However, according to Bell's concept, we should still have zero correlation; otherwise we have to accept that photons A and B don't have the same polarization any more.

It seems to me that Bell's theorem is in conflict not only with EPR but also with Malus' law.
 
  • #279
miosim said:
At the beginning, I did consider that I just misunderstood Bell, because everybody seems to agree with him.
Since it's clear you still understand nothing about Bell's argument (as evidenced by the fact that you couldn't even tell his "trivial ad hoc space-time picture" apart from his actual argument), why did you decide this wasn't the case? I guess you're just a ridiculously arrogant person who thinks he doesn't need to understand a theory in detail before trusting his gut feeling that it must be wrong?
miosim said:
However, I have an impression that Bell was under the influence of conviction/belief
The irony is rich!
miosim said:
Apparently I got suspicious over the word "immediately", which is irrelevant to the EPR argument but is appropriate within Bell's views.
Wait, are you saying you think the EPR argument doesn't imply that measuring one particle would "immediately" tell us what the result of an identical measurement on the other would be? You think there would be some delay in your ability to make such a prediction? If so you're wrong, EPR's argument also says you would have such immediate knowledge, just like in Einstein's two-box analogy where looking inside one box immediately tells you whether the other box has a ball in it or is empty.
miosim said:
However, within the EPR paper (1935) there are no "classical" particles. Instead there are two independent QM entities (described by individual wave functions)
No! In QM there is no "individual wavefunction" for each one that allows them to be treated independently, instead there is a single wavefunction for the entangled 2-particle system. So when either one is measured, the wavefunction describing both of them collapses. This is precisely why no local version of Malus' law, like the one that appears in classical physics, can reproduce the correlation predicted by QM.
miosim said:
Why did Bell 'strip' EPR particles of their QM privileges?
Because he wanted to investigate whether a local theory, which posited an objective reality independent of our measurements, could reproduce QM predictions. That was also what EPR and Einstein specifically were interested in. That's kind of the whole point, it's amazing that you can spend all this time bloviating about Bell and EPR and not get this.
miosim said:
Let’s test this assumption using Aspect’s experimental setup.
1). First let's fully align polarizers A and B and observe a maximum (say 100%) correlation.
2). Set polarizers A and B at 90 deg to each other and observe zero correlation.
3). Let's add one more polarizer (C) between polarizer B and the photon source. Let's set this polarizer at an intermediate angle of 45 deg and monitor correlated photons.

1. According to the EPR model we should observe about 25% correlated photons
What does "EPR model" even mean? EPR don't suggest a specific model, they simply suggest that when you have a perfect correlation upon making the same measurement on both particles, then both particles must have local properties that predetermine what result they will give to that measurement. For example, if you know particle #1 can pass through a polarizer at angle A (assuming that's the first polarizer it encounters), then particle #2 must also have properties that predetermine it would pass through a polarizer at angle A as well (again assuming that's the first it encounters, see the note below on how passing through multiple polarizers might change the properties). Why do you think that fact alone should tell us anything about the probability it will pass through two polarizers at different angles B and C?
miosim said:
2. However, according to Bell's concept, we should still have zero correlation; otherwise we have to accept that photons A and B don't have the same polarization any more.
Bell's concept is no different from EPR's, and again you are totally delusional if you think Bell's minimal assumptions say anything about the specific probabilities we should expect in this experiment. Note that in both Bell and EPR's version, it's quite possible that particle #2's properties are changed in some way when it passes through the first polarizer at angle B, so that its probability of passing through C is different from what it would be if C was the first polarizer it had encountered. For example, if the angles of C and A are the same, and particle #1 made it through A, that doesn't necessarily mean that any entangled particle #2 that makes it through B must also make it through C, although we know it definitely would have made it through C if that was the first polarizer it had encountered.
 
  • #280
miosim said:
However, within the EPR paper (1935) there are no "classical" particles. Instead there are two independent QM entities (described by individual wave functions)
JesseM said:
No! In QM there is no "individual wavefunction" for each one that allows them to be treated independently, instead there is a single wavefunction for the entangled 2-particle system.
Apparently Einstein didn't know that. From the EPR paper (1935):

“ … let us suppose that we have two systems I and II, which we permit to interact from the time t=0 to t=T, after which time we suppose that there is no longer any interaction between the two parts…
…We see therefore that, as a consequence of two different measurements performed upon the first system, the second system may be left in states with two different wave functions. On the other hand, since at the time of measurement the two systems no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system. This is, of course, merely a statement of what is meant by the absence of an interaction between the two systems. Thus, it is possible to assign two different wave functions (…) to the same reality… "
miosim said:
Why did Bell 'strip' EPR particles of their QM privileges?
JesseM said:
Because he wanted to investigate whether a local theory, which posited an objective reality independent of our measurements, could reproduce QM predictions. That was also what EPR and Einstein specifically were interested in. That's kind of the whole point …
So Bell was interested to see if there is any difference between the original system and a reduced version "stripped" of its key properties?
JesseM said:
What does "EPR model" even mean? EPR don't suggest a specific model, they simply suggest that when you have a perfect correlation when you make the same measurement on both particles, then both particles must have local properties that predetermine what result they will give to that measurement…
As I understand it, there are three key properties of the EPR model relevant to Bell's theorem:

1. After separation the particles have determined and perfectly correlated properties (spin, polarization, etc.).
2. The complementary particles don’t interact after separation.
3. The particles' behavior is independent of each other and is described by corresponding independent wave functions.
JesseM said:
For example if you know particle #1 can pass it through a polarizer at angle A (assuming that's the first polarizer it encounters), then particle #2 must also have properties that predetermine it would pass through a polarizer at angle A as well … Why do you think that fact alone should tell us anything about the probability it will pass through two polarizers at different angles B and C?
I don't worry about particle #1 while applying Malus' law to particle #2. While passing two consecutive polarizers, C (at 45 deg) and B (at 90 deg), the probability (intensity) for particle #2 to pass is:

I(final) = I(max) * cos^2(45) * cos^2(90-45) = I(max)*0.25

To comply with Malus' law particle #2, as I understand, must change its polarization, so the correlated photons #1 and #2 will have different polarizations. This isn't a problem for the EPR model but is prohibited for Bohm's entangled photons, which must have identical polarization.
JesseM said:
Bell's concept is no different from EPR's, and again you are totally delusional if you think Bell's minimal assumptions say anything about the specific probabilities we should expect in this experiment. Note that in both Bell and EPR's version, it's quite possible that particle #2's properties are changed in some way when it passes through the first polarizer at angle B, so that its probability of passing through C is different from what it would be if C was the first polarizer it had encountered. For example, if the angles of C and A are the same, and particle #1 made it through A, that doesn't necessarily mean that any entangled particle #2 that makes it through B must also make it through C, although we know it definitely would have made it through C if that was the first polarizer it had encountered.

So what is your prediction for the experiment I described in the previous post and repeated below:

Let’s test this assumption using Aspect’s experimental setup.
1). First let's fully align polarizers A and B and observe a maximum (say 100%) correlation.
2). Set polarizers A and B at 90 deg to each other and observe zero correlation.
3). Let's add one more polarizer (C) between polarizer B and the photon source, set this polarizer at an intermediate angle of 45 deg, and monitor the correlated photons.

According to the EPR model we should observe about 25% correlated photons, because the EPR photons on the B side will be rotated by the polarizers (B and C) and the intensity/probability of these photons can be calculated according to Malus' law.

What is your prediction for Bell's entangled photons at step 3)?
 
  • #281
miosim said:
So what is your prediction for the experiment I described in the previous post and repeated below:

Let’s test this assumption using Aspect’s experimental setup.
1). First let's fully align polarizers A and B and observe a maximum (say 100%) correlation.
2). Set polarizers A and B at 90 deg to each other and observe zero correlation.
3). Let's add one more polarizer (C) between polarizer B and the photon source, set this polarizer at an intermediate angle of 45 deg, and monitor the correlated photons.

According to the EPR model we should observe about 25% correlated photons, because the EPR photons on the B side will be rotated by the polarizers (B and C) and the intensity/probability of these photons can be calculated according to Malus' law.

What is your prediction for Bell's entangled photons at step 3)?
The question you asked does not have anything to do with Aspect's experimental setup. In his set-up, there is a switch before the polariser, so the photons would go either through B or C, not through both. You are just complicating the set-up for no purpose. All the extra polariser really does is absorb some photons that you could be measuring instead, making it harder to do the experiment because there is less signal.

In any case, I don't have time to go through the full calculation, but for this specific set of angles I think the predictions for the correlation are the same. There is a reason why Bell test experiments don't generally use 45 and 90 degrees, namely that the effect appears at intermediate angles.
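For readers who want to check the angle dependence numerically, here is a rough Python sketch (an illustration added here, not part of the original post) of the standard QM correlation for polarization-entangled photons, E(a,b) = cos 2(a-b) with +1/-1 outcomes for pass/reflect, together with the CHSH combination S; the function names and specific angle sets are only illustrative:

import numpy as np

def E(a_deg, b_deg):
    # QM correlation for polarization-entangled photons with +/-1 outcomes
    # (pass/reflect) at analyzers set to angles a_deg and b_deg.
    return np.cos(2 * np.radians(a_deg - b_deg))

def chsh(a, ap, b, bp):
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(chsh(0, 45, 22.5, 67.5))   # standard 22.5-deg steps: S = 2*sqrt(2) ~ 2.83
print(chsh(0, 90, 45, 135))      # settings built only from 0/45/90/135 deg: S = 0

Only the first angle set exceeds the local-realistic bound of 2, which is the point about intermediate angles.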

I really don't get what your problem with Bell's theorem is. You need to get away from the idea that it references any specific experimental set-up, assumption about polarisation, or physical model whatsoever.

It is simply a mathematical theorem that anyone with undergrad-level statistics knowledge can follow, and it is undoubtedly correct in itself. The only thing you can doubt is whether the assumptions made for the proof hold for the specific physical experiment in question. But the assumptions Bell uses are actually very general; they are pretty much:
- you have 2 measurement devices A and B that measure the state of something (represented by a hidden variable or set of hidden variable that is the initial state)
- the result of the measurement by device A depends on the setting of device A and the hidden variable only (but not on the setting of device B)
- the result of the measurement by device B depends on the setting of device B and the hidden variable only (but not on the setting of device A)

Given that these assumptions hold, the correlation between the results of the measurements at A and B will follow Bell's inequalities per the mathematical proof.

Consequently, should we observe that anything in nature does not obey Bell's inequalities - such as Aspect's experiment - we have to conclude that either the experiment had systematic errors affecting the correlations, or at least one of the assumptions made for this proof does not hold in nature (for example, there are non-local interactions such that the result at device A depends on the result at device B, as it does for entangled photons).
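To make those assumptions concrete, here is a small Python sketch (an added illustration, not the poster's) of one local hidden-variable model of exactly this kind: each pair carries a shared hidden polarization angle, and each wing's +1/-1 outcome is a deterministic function of its own setting and that hidden angle only. The particular response function is an arbitrary choice made for illustration:

import numpy as np
rng = np.random.default_rng(0)

def outcome(setting_deg, lam_deg):
    # Depends only on this wing's setting and the shared hidden variable lam.
    return np.sign(np.cos(2 * np.radians(setting_deg - lam_deg)))

def E_lhv(a, b, n=200_000):
    lam = rng.uniform(0, 180, n)     # hidden variable, one value per pair
    return np.mean(outcome(a, lam) * outcome(b, lam))

a, ap, b, bp = 0, 45, 22.5, 67.5
S = E_lhv(a, b) - E_lhv(a, bp) + E_lhv(ap, b) + E_lhv(ap, bp)
print(S)   # Monte Carlo estimate ~2.0 for this model

The estimate hovers around 2, while the quantum prediction for the same angles is 2*sqrt(2) ~ 2.83, which is exactly the sort of violation the inequality rules out for any model of this local form.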
 
  • #282
miosim said:
Apparently Einstein didn't know that. From the EPR paper (1935):

“ … let us suppose that we have two systems I and II, which we permit to interact from the time t=0 to t=T, after which time we suppose that there is no longer any interaction between the two parts…
…We see therefore that, as a consequence of two different measurements performed upon the first system, the second system may be left in states with two different wave functions.
Apparently you once again are jumping to conclusions based on isolated quotes you seize on even though you don't really understand them, instead of asking questions like a person with basic intellectual humility would do. Einstein says nothing here about the systems having separate wave functions before measurement, but according to the QM rules, after one entangled particle is measured you can have two independent wave functions for the two particles, see this textbook for example:
The particle undergoes a dramatic change of physical state in the process of measurement. It converts from the entanglement with its distant partner into the disentangled state of its own. In the former state the particle, even though separated from its partner by the vastness of space, did not have its full identity totally independent of the partner's. Their identities remained intimately shared. In the final state, each particle has its own full identity and can be described by a wave function of its own, independent of the rest of the world.

Thus, the measurement made on Earth changes instantaneously the situation not only on Earth, but also on Rulia (and vice versa). In the language of the wave functions, we can say that the wave function of the whole entangled system instantly collapses into one of the two possible independent wave functions:

\Psi = a\,|\uparrow\rangle_1 |\downarrow\rangle_2 + b\,|\downarrow\rangle_1 |\uparrow\rangle_2 \;\Rightarrow\; \text{either } |\uparrow\rangle_1 |\downarrow\rangle_2 \text{ or } |\downarrow\rangle_1 |\uparrow\rangle_2

JesseM said:
What does "EPR model" even mean? EPR don't suggest a specific model, they simply suggest that when you have a perfect correlation when you make the same measurement on both particles, then both particles must have local properties that predetermine what result they will give to that measurement…
miosim said:
So Bell was interested in seeing whether there is any difference between the original system and a reduced version 'stripped' of its key properties? As I understand it, there are four key properties of the EPR model relevant to Bell's theorem:

1. After separation the particles have determined and perfectly correlated properties (spin, polarization, etc.).
Yes, but only for the first measurement of each particle; in QM the second measurement of each won't necessarily be correlated if the second measurement operator doesn't "commute" with the first.
miosim said:
2. The complementary particles don’t interact after separation.
Yes, EPR do assume that.
miosim said:
3. The particles' behavior is independent of each other
Is this just a restatement of #2? They are not "independent" in the sense of statistical independence (if they were they couldn't give perfectly correlated measurement results), but they are supposed to be causally independent, i.e. they "don't interact after separation".
miosim said:
and is described by the corresponding independent wave functions.
No, there are no "independent wave functions" prior to measurement in QM, and in any case EPR are thinking about the possibility of a local hidden-variables theory which would reproduce the statistics of QM, but it presumably wouldn't use a QM "wave function" to do it because the QM wave function is not clearly a local entity.
miosim said:
I don't worry about particle #1 while applying Malus' law to particle #2. While passing two consecutive polarizers C (at 45 deg) and B (at 90 deg), the probability (intensity) for particle #2 to pass is:

I(final) = I(max) * cos^2(45) * cos^2(90-45) = I(max)*0.25
That cos^2(45) term doesn't really make sense; the particles aren't initially created at a polarization of zero! Instead, in QM the probability of passing through the first polarizer C, whatever its angle, would just be 1/2 (which happens to be equal to cos^2(45), but my point is that this figure of 1/2 has nothing to do with taking the cosine squared of the angle of C). If it does pass through C at 45 degrees, the probability it will then also pass through B at 90 is cos^2(90-45), so the transmitted intensity is I(max)*(1/2)*cos^2(90-45). In general, if C is at c degrees and B is at b degrees, the transmitted intensity is I(max)*(1/2)*cos^2(b-c)
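A quick numerical comparison of the two formulas may help (a Python sketch added here for illustration; the function names are ours). It shows that the expression which treats the photon as if it started out polarized at 0 degrees agrees with the quantum value only because cos^2(45) happens to equal 1/2:

import numpy as np

def malus_from_zero(c, b):
    # Malus' law applied twice, as if the photon were initially polarized at 0 deg.
    return np.cos(np.radians(c))**2 * np.cos(np.radians(b - c))**2

def qm_intensity(c, b):
    # Entangled photon: probability 1/2 at the first polarizer regardless of its
    # angle, then Malus' law between the two polarizer angles.
    return 0.5 * np.cos(np.radians(b - c))**2

for c, b in [(45, 90), (30, 90), (60, 90)]:
    print(c, b, malus_from_zero(c, b), qm_intensity(c, b))
# (45, 90): both give 0.25 -- agreement only because cos^2(45) = 1/2
# (30, 90): 0.1875 vs 0.125;  (60, 90): 0.1875 vs 0.375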
miosim said:
To comply with Malus' law, particle #2, as I understand, must change its polarization, so the correlated photons #1 and #2 will have different polarizations. This isn't a problem for the EPR model but is prohibited for Bohm's entangled photons, which must have identical polarization.
Bohm's model doesn't say the particles must continue to have identical polarizations after multiple measurements of each! Its statistical predictions are the same as ordinary QM's; only the first measurement of each will be correlated.
miosim said:
So what is your prediction for the experiment I described in the previous post and repeated below:

Let’s test this assumption using Aspect’s experimental setup.
1). First let's fully align polarizers A and B and observe a maximum (say 100%) correlation.
2). Set polarizers A and B at 90 deg to each other and observe zero correlation.
3). Let's add one more polarizer (C) between polarizer B and the photon source, set this polarizer at an intermediate angle of 45 deg, and monitor the correlated photons.
Are you just asking what the QM prediction would be here, as opposed to your nonsensical statements about "the EPR model" vs. "the Bell concept"? The QM prediction would be that the probability particle #1 makes it through A at 90 degrees and particle #2 makes it through C at 45 is given by (1/2)*cos^2(90-45), then the probability that particle #2 makes it through B at 90 if it already made it through C at 45 is cos^2(90-45), so the total probability that both particles are detected passing through the polarizers is (1/2)*cos^2(90-45)*cos^2(90-45) = (1/2)*(1/2)*(1/2) = 1/8. And the more general QM answer would be (1/2)*cos^2(a-c)*cos^2(b-c).
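For completeness, the general coincidence probability quoted above, (1/2)*cos^2(a-c)*cos^2(b-c), can be checked for the angles in the post with a few lines of Python (added here for illustration):

import numpy as np

def coincidence_prob(a, b, c):
    # P(photon 1 passes A at angle a, AND photon 2 passes C at c and then B at b),
    # following the QM account given in the post above.
    r = np.radians
    return 0.5 * np.cos(r(a - c))**2 * np.cos(r(b - c))**2

print(coincidence_prob(90, 90, 45))   # 0.125 = 1/8, as stated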
 
Last edited:
  • #283
JesseM said:
... Einstein says nothing here about the systems having separate wave functions before measurement, but according to the QM rules, after one entangled particle is measured you can have two independent wave functions for the two particles, see this textbook for example:

“…The particle undergoes a dramatic change of physical state in the process of measurement. It converts from the entanglement with its distant partner into the disentangled state of its own. In the former state the particle, even though separated from its partner by the vastness of space, did not have its full identity totally independent of the partner's. Their identities remained intimately shared. In the final state, each particle has its own full identity and can be described by a wave function of its own, independent of the rest of the world….”

It seems to me that this textbook reflects the views that Einstein opposed and called “spooky actions at a distance.” As I understand, Einstein in the EPR paper expressed disagreement with “the entanglement with its distant partner” by stating that
” … two systems I and II, which we permit to interact from the time t=0 to t=T, after which time we suppose that there is no longer any interaction between the two parts…”
Therefore after separation, according to the EPR paper (the way I understand it), the correlated QM systems can't share the same wave function/packet and the only choice is to admit that both independent systems have two independent wave functions/packets instead. As I understand, the EPR interpretation has one more important distinction from the orthodox QM interpretation. According to EPR each individual QM system has all its parameters already determined prior to the act of measurement. The measurement, at a later time, just allows us to learn about these parameters.
JesseM said:
Are you just asking what the QM prediction would be here, as opposed to your nonsensical statements about "the EPR model" vs. "the Bell concept"? The QM prediction would be that the probability particle #1 makes it through A at 90 degrees and particle #2 makes it through C at 45 is given by (1/2)*cos^2(90-45), then the probability that particle #2 makes it through B at 90 if it already made it through C at 45 is cos^2(90-45), so the total probability that both particles are detected passing through the polarizers is (1/2)*cos^2(90-45)*cos^2(90-45) = (1/2)*(1/2)*(1/2) = 1/8. And the more general answer would be (1/2)*cos^2(a-c)*cos^2(b-c).
Question:

Would the entangled photons #1 and #2, after passing their respective polarizers (A, B and C), have identical or different polarizations?
 
  • #284
DrChinese said:
They didn't supply this value as I read it either. Since they define entangled as timetags within 6 ns, everything else is ignored. As the time window is increased, you get a lower value of S because a few unentangled* photon pairs are being considered.

*This may seem surprising, but pairs can be partially entangled. Anywhere between 0 and 100% fidelity, actually.

Are there other experiments with the N(a) and N(b) a little lower so you would "notice" (or not) the extra unentangled photons? Not providing those figures seems odd.
 
  • #285
DrChinese said:
Are you back to invisible photons? Those won't enter into any experimental statistics anywhere. Or?

Table I from the experiment you quoted (http://arxiv.org/PS_cache/quant-ph/pdf/0205/0205171v1.pdf) provides a sample run (sorry, the copy and paste only works so well). Fundamentally you can see the runs are averaging 85000 hits on each detector, with quite a range. Remember, they are claiming that the photons that are falling off the "coincidence" column (second last, all under 1000) are not adding to the third column. In fact, the runs themselves are varying by much more than 1000 hits when they are searching for something in the 100's. Don't they have some explaining to do?

α       β        NA      NB      N     NAc
-45°    -22.5°   84525   80356   842   10.0
-45°     22.5°   84607   82853   212   10.3
-45°     67.5°   83874   82179   302   10.1
-45°    112.5°   83769   77720   836    9.5
0°      -22.5°   87015   80948   891   10.3
0°       22.5°   86674   83187   869   10.6
0°       67.5°   87086   81846   173   10.5
0°      112.5°   86745   77700   261    9.9
45°     -22.5°   87782   80385   255   10.3
45°      22.5°   87932   83265   830   10.7
45°      67.5°   87794   81824   814   10.5
45°     112.5°   88023   77862   221   10.1
90°     -22.5°   88416   80941   170   10.5
90°      22.5°   88285   82924   259   10.7
90°      67.5°   88383   81435   969   10.6
90°     112.5°   88226   77805   846   10.1

TABLE I: Singles (NA, NB) and coincidence (N) detections as a function of polarizer angles α, β. The acquisition window was T = 15 seconds, irises were fully open. Also shown are "accidental" coincidences (NAc = NA·NB·τ/T) assuming a coincidence window of τ = 25 ns.
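As a rough cross-check of the caption's accidental-coincidence estimate, here is a short Python sketch (added for illustration); it takes the pasted numbers at face value, which may differ slightly from the published table, so only order-of-magnitude agreement should be expected:

# Accidentals estimated as N_Ac = N_A * N_B * tau / T, with tau the coincidence
# window and T the acquisition time, as in the table caption.
tau, T = 25e-9, 15.0
N_A, N_B = 84525, 80356          # first row of the table
print(N_A * N_B * tau / T)       # ~11, the same order as the ~10 listed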
 
Last edited by a moderator:
  • #286
miosim said:
It seems to me that this textbook reflects the views that Einstein opposed and called “spooky actions at a distance.”
The textbook is simply discussing the quantum formalism involving wave functions, and Einstein's understanding of how that formalism is applied was no different from any other physicist's. What Einstein was hoping for was a new model with a new formalism, that would be clearly local and would not involve the notion of quantum "wave functions" at all, but which would make the same predictions about observable measurement outcomes as the quantum formalism.
miosim said:
As I understand, Einstein in the EPR paper expressed disagreement with “the entanglement with its distant partner” by stating that
” … two systems I and II, which we permit to interact from the time t=0 to t=T, after which time we suppose that there is no longer any interaction between the two parts…”
Therefore after separation, according to the EPR paper (the way I understand it), the correlated QM systems can't share the same wave function/packet
No, the EPR paper is talking about what they think would really be true in a complete physical description, not what is true in the "wave function" description of QM which the paper argues is incomplete. The whole point of the discussion of "elements of physical reality" was to try to argue that the QM wave function cannot be a complete description since it does not describe an element of physical reality they feel must be present at the location of the second particle.
miosim said:
As I understand, the EPR interpretation has one more important distinction from the orthodox QM interpretation. According to EPR each individual QM system has all its parameters already determined prior to the act of measurement. The measurement, at a later time, just allows us to learn about these parameters.
Yes, this part is correct, and if you assume this is true for the parameter of whether it will pass through or be reflected by polarizers at three possible angles, it's easy to derive a Bell inequality from this. Let A be the property of passing through a polarizer at angle a (while not-A would be the property of not passing through), B the property of passing through a polarizer at angle b (or not-B for not passing through), and C be the property of passing through a polarizer at angle c (not-C for not passing through). Then before being measured, each particle must either have or not have each of these properties, so if we could somehow know the values of these properties for a large series of particles, some particles might have (A, not-B, C) while others might have (not-A, B, not-C) and so forth. Then just look at the simple inequality discussed on this page:
The result of the proof will be that for any collection of objects with three different parameters, A, B and C:

The number of objects which have parameter A but not parameter B plus the number of objects which have parameter B but not parameter C is greater than or equal to the number of objects which have parameter A but not parameter C.

We can write this more compactly as:

Number(A, not B) + Number(B, not C) greater than or equal to Number(A, not C)

The relationship is called Bell's inequality.

In class I often make the students the collection of objects and choose the parameters to be:

A: male B: height over 5' 8" (173 cm) C: blue eyes

Then the inequality becomes that the number of men students who do not have a height over 5' 8" plus the number of students, male and female, with a height over 5' 8" but who do not have blue eyes is greater than or equal to the number of men students who do not have blue eyes. I absolutely guarantee that for any collection of people this will turn out to be true.

It is important to stress that we are not making any statistical assumption: the class can be big, small or even zero size. Also, we are not assuming that the parameters are independent: note that there tends to be a correlation between gender and height.

Sometimes people have trouble with the theorem because we will be doing a variation of a technique called proof by negation. For example, here is a syllogism:

All spiders have six legs. All six legged creatures have wings. Therefore all spiders have wings

If we ever observe a spider that does not have wings, then we know that at least one and possibly both of the assumptions of the syllogism are incorrect. Similarly, we will derive the inequality and then show an experimental circumstance where it is not true. Thus we will know that at least one of the assumptions we used in the derivation is wrong.

Also, we will see that the proof and its experimental tests have absolutely nothing to do with Quantum Mechanics.

Now we are ready for the proof itself. First, I assert that:

Number(A, not B, C) + Number(not A, B, not C) must be either 0 or a positive integer

or equivalently:

Number(A, not B, C) + Number(not A, B, not C) greater than or equal to 0

This should be pretty obvious, since either no members of the group have these combinations of properties or some members do.

Now we add Number(A, not B, not C) + Number(A, B, not C) to the above expression. The left hand side is:

Number(A, not B, C) + Number(A, not B, not C) + Number(not A, B, not C) + Number(A, B, not C)

and the right hand side is:

0 + Number(A, not B, not C) + Number(A, B, not C)

But this right hand side is just:

Number(A, not C)

since for all members either B or not B must be true. In the classroom example above, when we counted the number of men without blue eyes we include both those whose height was over 5' 8" and those whose height was not over 5' 8".

Above we wrote "since for all members either B or not B must be true." This will turn out to be important.

We can similarly collect terms and write the left hand side as:

Number(A, not B) + Number(B, not C)

Since we started the proof by asserting that the left hand side is greater than or equal to the right hand side, we have proved the inequality, which I re-state:

Number(A, not B) + Number(B, not C) greater than or equal to Number(A, not C)
Please look over this and tell me whether you agree or disagree with the inequality Number(A, not B) + Number(B, not C) greater than or equal to Number(A, not C). If you're not sure because you don't understand some line of the proof, point out which is the first line you have trouble with.
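For anyone who prefers to check this by brute force rather than by following the proof, here is a short Python sketch (added for illustration) that generates many random populations in which every member has definite A, B, C values and confirms the inequality for each one:

import random
random.seed(0)

def count(pop, **fixed):
    # e.g. count(pop, A=True, B=False) gives Number(A, not B)
    idx = {"A": 0, "B": 1, "C": 2}
    return sum(all(p[idx[k]] == v for k, v in fixed.items()) for p in pop)

for _ in range(1000):
    pop = [tuple(random.choice([True, False]) for _ in range(3))
           for _ in range(random.randint(0, 50))]
    lhs = count(pop, A=True, B=False) + count(pop, B=True, C=False)
    rhs = count(pop, A=True, C=False)
    assert lhs >= rhs   # Number(A, not B) + Number(B, not C) >= Number(A, not C)
print("no counter-example found")

No counter-example is possible, because each individual member that contributes to the right-hand side also contributes to one of the two terms on the left, depending on whether it has B or not-B.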
miosim said:
Question:

Would the entangled photons #1 and #2, after passing their respective polarizers (A, B and C), have identical or different polarizations?
Your question is ambiguous because in quantum mechanics particles cannot have definite polarizations at all possible angles. In your experiment, the photons are no longer entangled after they have passed through the polarizers, so they are no longer guaranteed to give identical results at all angles. But the last polarizer each one passed through was at 90 degrees, so they would both be guaranteed to pass through another polarizer at 90 degrees, and likewise both be guaranteed not to pass through another polarizer at 0 or 180 degrees. On the other hand, if the next polarizer each encountered was at some different angle like 80 degrees or 45 degrees, it might be that one would pass through but the other would be reflected.
 
  • #287
edguy99 said:
Are there other experiments with the N(a) and N(b) a little lower so you would "notice" (or not) the extra unentangled photons?

Sure, there are a bunch. Keep in mind that the tester wants a source of ENTANGLED pairs so that those can be analyzed. Here is one which was done on ions instead of photons, and all events are considered. This means a lower S value, but it is still greater than 2 - which is the Local Realistic max.

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html

"If we take into account the imperfections of our experiment (imperfect state fidelity, manipulations, and detection), this value agrees with the prediction of quantum mechanics.

The result above was obtained using the outcomes of every experiment, so that no fair-sampling hypothesis is required. In this case, the issue of detection efficiency is replaced by detection accuracy. The dominant cause of inaccuracy in our state detection comes from the bright state becoming dark because of optical pumping effects. For example, imperfect circular polarization of the detection light allows an ion in the |↓⟩ state to be pumped to |↑⟩, resulting in fewer collected photons from a bright ion. Because of such errors, a bright ion is misidentified 2% of the time as being dark. This imperfect detection accuracy decreases the magnitude of the measured correlations. We estimate that our Bell's signal would be 2.37 with perfect detection accuracy.

We have thus presented experimental results of a Bell's inequality measurement where a measurement outcome was recorded for every experiment. Our detection efficiency was high enough for a Bell's inequality to be violated without requiring the assumption of fair sampling, thereby closing the detection loophole in this experiment."
 
  • #288
miosim said:
Einstein didn’t know that his concept could be transformed into a circus.

According to the EPR argument the two correlated particles are represented by two different and independent wave functions. When the first wave function collapses, it reveals one parameter (+spin), which gives us knowledge about the complementary parameter (-spin) of the second wave function. Because this wave function contains no description of that parameter, the wave function, and QM accordingly, is incomplete.

Now let's see Bell's 'reasonable' reproduction of this EPR model:

“…Let us illustrate the possibility of what Einstein had in mind in the context of the particular quantum mechanical predictions already cited for the EPRB gedanken experiment. These predictions make it hard to believe in the completeness of quantum formalism…”
Then Bell ‘mumbles’ the following:
“…But of course outside that formalism they make no difficulty whatever for the notion of local causality. To show this explicitly we exhibit a trivial ad hoc space-time picture of what might go on. It is a modification of the naive classical picture already described. Certainly something must be modified in that, to reproduce the quantum phenomena. Previously, we implicitly assumed for the net force in the direction of the field gradient (which we always take to be in the same direction as the field) a form: F cos θ ….”

This is it. These are all the efforts to recreate the EPR model in the spirit of Einstein. Based on these 'exhaustive' efforts, Bell proclaimed that it isn't possible to build such a model.
Is this hilarious? Is this a circus?

Bell (and his supporters) just forgot that the EPR particles are represented by two independent wave functions, and therefore their cos^2 behavior is identical to that of Bell's QM model.

Secondly, if Bell decided to model the EPR particles as classical ones, he should at least have included the interactions of these particles with the polarizers (the QM formalism has these interactions built in) as follows: the polarizer, like an optical 'funnel', modifies the polarization of both photons in the direction of higher correlation, in this way eliminating the inequality with the QM prediction.

It seems to me that Bell's theorem is dead.

I agree with this post and would like to add another quote that was just brought to my attention, from the experiment at http://arxiv.org/PS_cache/quant-ph/pdf/9810/9810080v1.pdf:

Yet we agree with John Bell that ”. . . it is hard for me to believe that quantum mechanics works so nicely for inefficient practical set-ups and is yet going to fail badly when sufficient refinements are made. Of more importance, in my opinion, is the complete absence of the vital time factor in existing experiments. The analyzers are not rotated during the flight of the particles.”

He is talking about Bob rotating his measuring device and Alice "instantly" seeing a change.

In my opinion, a far better explanation has been quoted earlier "... the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous. "
 
Last edited by a moderator:
  • #289
edguy99 said:
In my opinion, a far better explanation has been quoted earlier "... the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous. "

For the cited experiments, there are no reference frames in which Alice's selection of settings and Bob's selection of settings are within the same light cone regardless of direction of causality. So what you are implying is incorrect. Not that it would matter for a local realist, as realism is ejected immediately if you assert that locality holds.
 
  • #290
DrChinese said:
Sure, there are a bunch. Keep in mind that the tester wants a source of ENTANGLED pairs so that those can be analyzed. Here is one which was done on ions instead of photons, and all events are considered. This means a lower S value, but it is still greater than 2 - which is the Local Realistic max..."

Are there any using photons? If you were modeling an ion compared to a photon, you would certainly want to consider spin, but the model of the ions vs photons would have many other differences, and switching to ions raises more issues than it solves in my opinion.
 
  • #291
edguy99 said:
I agree with this post
Did you read my response? Most of miosim's comments there are completely confused.
edguy99 said:
and would like to add another quote that was just brought to my attention. From the experiment quoted http://arxiv.org/PS_cache/quant-ph/pdf/9810/9810080v1.pdf"

Yet we agree with John Bell that ”. . . it is hard for me to believe that quantum mechanics works so nicely for inefficient practical set-ups and is yet going to fail badly when sufficient refinements are made. Of more importance, in my opinion, is the complete absence of the vital time factor in existing experiments. The analyzers are not rotated during the flight of the particles.”

He is talking about Bob rotating his measuring device and Alice "instantly" seeing a change.
No he isn't, that would imply Bob could send a message to Alice faster than light, which isn't possible in QM. Bell's comment about rotating devices is to suggest that the settings need to be chosen after the source has already emitted the particles and they are in "flight", since if the settings were chosen beforehand, some kind of hidden signal could travel from the devices to the source so that it could use that information to decide the hidden variables of the particles and violate Bell's inequality without violating locality. Only if the device settings are chosen after the particles have been emitted can you rule out a local explanation for violations of Bell inequalities. I think this issue has been resolved with later experiments, for example the Bell test loophole wiki article section on the locality loophole says "Weihs et al. improved on this with a distance on the order of a few hundred meters in their experiment in addition to using random settings retrieved from a quantum system. Scheidl et.al. (2010) improved on this further by conducting an experiment between locations separated by a distance of 144 km."
edguy99 said:
In my opinion, a far better explanation has been quoted earlier "... the two events are separated by a "space-like" interval; not a "time-like" interval, so effectively, in half of the reference frames a is before b, and in another half of the reference frames b is before a, and, of course, in some specifically defined reference frames, the two events are simultaneous. "
This is part of how Bell experiments are supposed to be performed, but why do you call this a "far better explanation"? Explanation for what? The very fact that the measurements are conducted at a spacelike interval is essential to why it is impossible for any local realistic theory to reproduce the violations of Bell inequalities predicted by QM.
 
Last edited by a moderator:
  • #292
edguy99 said:
Are there any using photons? If you were modeling an ion compared to a photon, you would certainly want to consider spin, but the model of the ions vs photons would have many other differences, and switching to ions raises more issues than it solves in my opinion.

Try:
http://arxiv.org/abs/quant-ph/0303018

I think what you are saying is really: I want an experiment done in Ireland on a rainy Tuesday. I mean, a simple read will tell you why a Bell test is a Bell test is a Bell test. They ALL SAY THE SAME THING. S>2 to X standard deviations, or similar. The one cited above has much greater source fidelity, so local realism is ruled out by over 213 standard deviations.

So, perhaps you should consider searching for yourself. Here is a good starting point:

http://arxiv.org/find/all/1/abs:+AND+experimental+AND+bell+photon/0/1/0/all/0/1

Or, on a more humorous note: Try this :smile:
 
  • #293
DrChinese said:
Try:
http://arxiv.org/abs/quant-ph/0303018

I think what you are saying is really: I want an experiment done in Ireland on a rainy Tuesday. I mean, a simple read will tell you why a Bell test is a Bell test is a Bell test. They ALL SAY THE SAME THING. S>2 to X standard deviations, or similar. The one cited above has much greater source fidelity, so local realism is ruled out by over 213 standard deviations. ...

I think the accuracy they are quoting is the ability to detect pairs within a lot of hits, as they must measure say 100 to 800 pairs in a total of 85000 hits on each side in the prior experiment. If you note, being able to detect 100 to 800 pairs in a total of 170000 hits would be considered "more" accurate in their sense of the word.

In the context of what we are talking about, the more hits on the detectors, especially relative to pairs, the more inaccurate the measurement.
 
  • #294
JesseM said:
Please look over this and tell me whether you agree or disagree with the inequality Number(A, not B) + Number(B, not C) greater than or equal to Number(A, not C). If you're not sure because you don't understand some line of the proof, point out which is the first line you have trouble with.
I don't want to repeat my arguments against careless application of math or formal logic to complex physical problems without carefully analyzing the initial (physical) conditions. Instead, if you don't mind, I would like to focus on an experiment that may falsify the concept of non-locality.

As I understand it, a photon that has passed a polarizer may change its polarization as a result of the photon/polarizer interaction. Let's assume that we have a pair of entangled photons. Using a rotating polarizer we change the polarization of the photons on one side while monitoring the polarization of the entangled photons on the other side. If the second entangled photon follows the polarization of the first photon, we may claim that a non-local interaction exists. Is that true?
 
  • #295
miosim said:
I don't want to repeat my arguments against careless application of math or formal logic to complex physical problems without carefully analyzing the initial (physical) conditions.
But you agreed that according to EPR, the particles must have predetermined results for all possible measurements. It's a simple matter to show that from this assumption, the inequality I mentioned follows. Of course the inequality is only telling you that out of all the particle pairs it's true that Number(A, not B) + Number(B, not C) ≥ Number(A, not C); a small amount of additional reasoning is needed to show this implies the experimental inequality:

[of the subset of all particle pairs where #1 was measured at angle a and #2 was measured at angle b, the number in this subset where particle #1 had property A and particle #2 had property not-B]

+

[of the subset of all particle pairs where #1 was measured at angle b and #2 was measured at angle c, the number in this subset where particle #1 had property B and particle #2 had property not-C]

greater than or equal to

[of the subset of all particle pairs where #1 was measured at angle a and #2 was measured at angle c, the number in this subset where particle #1 had property A and particle #2 had property not-C]

Actually the reasoning for going from the first inequality to this one is fairly simple; it just involves the idea that the source doesn't have any "precognition" about what settings the experimenters will choose when it creates a pair of particles with a given set of properties. But I'm not even asking about this experimental inequality now. I just want to know whether you agree that, if EPR are correct that every particle pair must have an identical set of predetermined measurement results like [A, not-B, not-C] or [A, B, not-C], then if N particle pairs were created and some imaginary omniscient being knew the full set of hidden properties for each one, that imaginary omniscient being would necessarily see that this inequality is satisfied for the total collection of N particle pairs:

Number(A, not B) + Number(B, not C) ≥ Number(A, not C)

If you don't agree, do you think that given enough time you could find a list that violates it? For example, here is a list of 10 particle pairs, with the full set of properties seen by the omniscient being listed alongside each one:

pair #1: [A, B, C]
pair #2: [A, not-B, not-C]
pair #3: [not-A, B, not-C]
pair #4: [A, B, not-C]
pair #5: [A, not-B, C]
pair #6: [not-A, not-B, C]
pair #7: [A, B, not-C]
pair #8: [not-A, not-B, not-C]
pair #9: [A, B, C]
pair #10: [A, not-B, not-C]

Here we can see that Number(A, not B)=3, Number(B, not C)=3, and Number(A, not C)=4, so the inequality is satisfied. Again, if you disagree with or doubt the claim that the inequality is always satisfied under the assumption that each particle has definite predetermined measurement results for all three settings, then you should try to back that up with a counter-example.
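The counts for the ten-pair example can be reproduced mechanically (a Python sketch added here for illustration):

pairs = [
    (True,  True,  True),    # pair 1:  [A, B, C]
    (True,  False, False),   # pair 2:  [A, not-B, not-C]
    (False, True,  False),   # pair 3:  [not-A, B, not-C]
    (True,  True,  False),   # pair 4:  [A, B, not-C]
    (True,  False, True),    # pair 5:  [A, not-B, C]
    (False, False, True),    # pair 6:  [not-A, not-B, C]
    (True,  True,  False),   # pair 7:  [A, B, not-C]
    (False, False, False),   # pair 8:  [not-A, not-B, not-C]
    (True,  True,  True),    # pair 9:  [A, B, C]
    (True,  False, False),   # pair 10: [A, not-B, not-C]
]
n_A_notB = sum(a and not b for a, b, c in pairs)   # 3
n_B_notC = sum(b and not c for a, b, c in pairs)   # 3
n_A_notC = sum(a and not c for a, b, c in pairs)   # 4
print(n_A_notB + n_B_notC >= n_A_notC)             # True: 3 + 3 >= 4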
miosim said:
Instead, if you don't mind, I would like to focus on an experiment that may falsify the concept of non-locality.
Oh but I do mind, my question is a very simple one and if you are remotely sincere about trying to understand the Bell/EPR argument, as opposed to just making a lawyer-like rhetorical case against Bell, then you should have no problem answering this. If you refuse to answer this simple question I'll conclude you have no intellectual integrity and are just trying to "win" the argument at all costs, in which case there is no point to further discussion.
miosim said:
As I understand it, a photon that has passed a polarizer may change its polarization as a result of the photon/polarizer interaction.
As I said the phrase "its polarization" doesn't have a clear meaning in QM, since for most angles the quantum state just gives you probabilities that the particle will pass through the polarizer.
miosim said:
Let's assume that we have a pair of entangled photons. Using a rotating polarizer we change the polarization of the photons on one side while monitoring the polarization of the entangled photons on the other side. If the second entangled photon follows the polarization of the first photon, we may claim that a non-local interaction exists. Is that true?
No, I already told you several times that the first measurement of each particle breaks the entanglement, after that the two photons are no more correlated than two non-entangled photons which happened to give the same two results to those first measurements.
 
  • #296
JesseM said:
But you agreed that according to EPR, the particles must have predetermined results for all possible measurements. It's a simple matter to show that from this assumption, the inequality I mentioned follows.
No, I don't agree with this interpretation of the EPR concept. The EPR concept doesn't claim that the 'hidden parameters' are deterministic (and Bell understood that). They could be, for example, a combination of a deterministic component and stochastic processes, so the final result isn't absolutely determined. Einstein didn't elaborate on the nature of these variables/processes but just provided an argument in favor of their existence. The criterion for searching for these parameters is that they should provide a realistic description in full agreement with the established formalism of QM. Therefore, if Bell built his inequality on differences in behavior between the EPR particle and the traditional QM particle, his inequalities are invalid by definition. If Bell had at least provided an adequate justification for this violation (simplification) of the EPR concept, I would accept his view at least as a reasonable hypothesis.
JesseM said:
my question is a very simple one and if you are remotely sincere about trying to understand the Bell/EPR argument, as opposed to just making a lawyer-like rhetorical case against Bell, then you should have no problem answering this. If you refuse to answer this simple question I'll conclude you have no intellectual integrity and are just trying to "win" the argument at all costs, in which case there is no point to further discussion.

I understand Bell's mathematical formalism that led him to his inequality ("Bertlmann's socks …", pages C2-48 through C2-52). However, I have a hard time forcing myself to study in more detail the additional reasoning of his 'torturous' (for me) logic, which led him to a conclusion I refuse to accept because of incorrect initial conditions. However, because Bell's inequalities are a large part of today's conversation within physics, I will spend some time this week studying them in more detail (including your post). I will be on the road most of this week, so I will not be able to provide a timely response.
 
  • #297
miosim said:
No, I don't agree with this interpretation of the EPR concept. The EPR concept doesn't claim that the 'hidden parameters' are deterministic (and Bell understood that).
Uh, didn't you say the exact opposite earlier when you said "According to EPR each individual QM system has all its parameters already determined prior to the act of measurement"? Anyway, I'm not sure what you mean by "parameters": whether you're talking about the values of all hidden variables (which might be arbitrarily complex) or simply the predetermined facts about what result the particle will give if measured with any particular detector setting. I wasn't talking about the hidden variables; I was talking specifically about the predetermined results which are determined by those variables. For example it's possible that the variables fluctuate in a partially random way, but nevertheless at any time prior to measurement, if you had complete knowledge of the hidden variables (along with any observable variables prior to measurement) you would be able to predict with certainty what result would be found if the particle was measured at setting A or B or C. If this wasn't true there would be no way (in a local realist universe respecting the no-conspiracy condition) to explain the fact that, whenever the particles are measured with the same detector setting, we are guaranteed to get identical (or opposite) results, even when there is a spacelike separation between the two measurements and choices of settings, so that according to local realism one experimenter's choice of setting cannot possibly have had a causal influence on the other experimenter or the other particle.

If you still don't see why this fact of guaranteed identical results when the same setting is chosen implies predetermined answers to all possible settings (under the assumption of local realism), then we should really focus on this issue. I could try to explain it using analogies like the game show analogy [post=3290921]here[/post], though I know you have said in the past you don't like analogies. If you want a more rigorous derivation I would use an argument involving light cones as Bell did in the paper I linked to and discussed in [post=3248153]this post[/post]...so firstly, are you familiar with the concept of a "light cone" in special relativity, and why under local realism events can only be causally influenced by other events in their past light cone? If you're not familiar with this concept, this page might be a good place to start.
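One way to see the "predetermined results" point numerically: in any local model, each wing's outcome can depend only on its own setting and the shared hidden variable. If that dependence is deterministic, equal settings always agree; if there is any leftover local randomness, they cannot. The following Python sketch (our illustration, with an arbitrary choice of hidden variable and response functions) shows the difference:

import numpy as np
rng = np.random.default_rng(1)

def deterministic(setting, lam):
    # Outcome is a fixed +/-1 function of the local setting and hidden variable.
    return np.sign(np.cos(2 * np.radians(setting - lam)))

def stochastic(setting, lam):
    # Locally random: each wing flips its own cos^2-weighted coin.
    p_plus = np.cos(np.radians(setting - lam))**2
    return np.where(rng.random(lam.shape) < p_plus, 1.0, -1.0)

lam = rng.uniform(0, 180, 100_000)   # shared hidden variable, one per pair
for model in (deterministic, stochastic):
    agree = np.mean(model(30, lam) == model(30, lam))  # both wings, SAME setting
    print(model.__name__, agree)
# deterministic: 1.0  -- reproduces the guaranteed same-setting agreement
# stochastic:   ~0.75 -- residual local randomness spoils it

So a local model can only reproduce the perfect same-setting correlations if the outcomes are already fixed by the hidden variable, which is the premise the inequality needs.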
 
  • #298
JesseM said:
Uh, didn't you say the exact opposite earlier when you said "According to EPR each individual QM system has all its parameters already determined prior to the act of measurement"? Anyway, I'm not sure what you mean by "parameters": whether you're talking about the values of all hidden variables (which might be arbitrarily complex) or simply the predetermined facts about what result the particle will give if measured with any particular detector setting …
The way I interpret the EPR argument, we can predict in advance the photon's polarization as a parameter, however (as I mentioned before) not necessarily as a deterministic value but in terms of probability instead. For example, the photon's polarization may fluctuate around a specific value, causing the result of a measurement to be probabilistic.
It seems to me that during the interaction with a polarizer, a photon's polarization is rotated to align with the polarizer if their angles are close enough. That is why, even though correlated photons may have slightly different polarizations, after interacting with their respective polarizers at identical settings the polarizations of these photons will be realigned and the result of the measurement will be close to 100% correlation. However, if the polarizers aren't aligned, the result of the measurement is less deterministic and follows a cos^2 correlation (instead of the linear correlation that, as I understand it, is associated with an absolutely deterministic outcome).
JesseM said:
…. I wasn't talking about the hidden variables; I was talking specifically about the predetermined results which are determined by those variables. For example it's possible that the variables fluctuate in a partially random way, but nevertheless at any time prior to measurement, if you had complete knowledge of the hidden variables (along with any observable variables prior to measurement) you would be able to predict with certainty what result would be found if the particle was measured at setting A or B or C. If this wasn't true there would be no way (in a local realist universe respecting the no-conspiracy condition) to explain the fact that, whenever the particles are measured with the same detector setting, we are guaranteed to get identical (or opposite) results, even when there is a spacelike separation between the two measurements and choices of settings…
We can't predict the result of a measurement because we don't have complete knowledge about the photon, which is randomly changing its parameters. We also don't have complete knowledge about fluctuations of the polarizer setting. That is why the local realistic EPR model may not include "complete" knowledge of reality. QM is "wise" enough to deal with this situation in terms of probability by "refusing" to predict individual events. That is why QM is an EMPIRICAL theory that predicts the statistically processed observations but can't explain them. The EPR model may have the same deficiency in prediction, but at least it offers a realistic explanation of events.
JesseM said:
I could try to explain it using analogies like the game show analogy here, though I know you have said in the past you don't like analogies.
I have no problem with analogies if they are adequate to the phenomena we try to explain. A good analogy helps, but a wrong one leads us further from the destination.
JesseM said:
If you want a more rigorous derivation I would use an argument involving light cones as Bell did in the paper I linked to and discussed in this post...so firstly, are you familiar with the concept of a "light cone" in special relativity, and why under local realism events can only be causally influenced by other events in their past light cone? If you're not familiar with this concept, this page might be a good place to start.
I read the link you provided and I understand the concept of a "light cone" in special relativity. I tried to read "La nouvelle cuisine" using the link you provided, but found only the beginning of the paper (it ended at page 217). Bell starts with an example of instantaneous events:

“...there are things which do go faster than light. British sovereignty is the classical example. When the Queen dies in London (long may it be delayed) the Prince of Wales, lecturing on modern architecture in Australia, becomes instantaneously King... And there are things like that in physics. In Maxwell’s theory … Coulomb gauge the scalar potential propagates with infinite velocity… ”

Indeed, the facts that the "Prince of Wales becomes instantaneously King" and that "the Coulomb scalar potential propagates with infinite velocity" are very good analogies between FORMAL LOGIC and MATHEMATICAL ABSTRACTION. However, they have nothing to do with reality, and therefore this would be a bad analogy for reality. I wasn't able to find the entire paper, but I get the feeling that Bell's views on reality are influenced by this analogy.
miosim said:
As I understand it, a photon that has passed a polarizer may change its polarization as a result of the photon/polarizer interaction.
JesseM said:
As I said the phrase "its polarization" doesn't have a clear meaning in QM, since for most angles the quantum state just gives you probabilities that the particle will pass through the polarizer.
However, as I understand it, QM describes a photon's polarization at least in terms of probability. Therefore, could we suggest that a photon's interaction with a polarizer may shift/rotate the "parameter" that describes the "probability vector" of the photon's polarization?
miosim said:
Let's assume that we have a pair of entangled photons. Using a rotating polarizer we change the polarization of the photons on one side while monitoring the polarization of the entangled photons on the other side. If the second entangled photon follows the polarization of the first photon, we may claim that a non-local interaction exists. Is that true?
JesseM said:
No, I already told you several times that the first measurement of each particle breaks the entanglement, after that the two photons are no more correlated than two non-entangled photons which happened to give the same two results to those first measurements.
Does that mean that any interaction between a photon and other particles breaks the entanglement? Does a photon's interaction with air molecules break the entanglement? Did Aspect perform his experiment in a vacuum?
 
  • #299
miosim said:
[...] Does a photon's interaction with air molecules break the entanglement? Did Aspect perform his experiment in a vacuum?

Good question! I searched and found the -rather recent- answer here:

http://www.esa.int/esaMI/GSP/SEMXM7Q08ZE_0.html
 
  • #300
Regardless of the mechanism, there is a "magic" result in QM that the observation of one of the two entangled particles effectively puts the other one into a pure state as given by that result. Although the initial state of the pair of particles can be described by probabilities, once an observation has been made, the state at the other end is known exactly, and the other particle behaves like one prepared with a precisely known state.

If the observations are separated by a spacelike interval (that is, they are sufficiently far apart in space and close enough in time that there could not be a light-speed signal between them) then the question of which observation is "first" seems to depend on the frame of reference, but the same result is obtained either way.
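A small numerical illustration of that last point (a Python sketch added here; the function names are ours): computing the joint "both photons pass" probability by taking either observation as the "first" one gives the same number, so the frame-dependent ordering has no observable consequence:

import numpy as np

def both_pass_alice_first(a, b):
    # Alice passes with prob 1/2; her result leaves Bob's photon in a definite
    # polarization state along a, so Bob then passes with Malus' cos^2(a - b).
    return 0.5 * np.cos(np.radians(a - b))**2

def both_pass_bob_first(a, b):
    # Same reasoning with the roles of the two observers reversed.
    return 0.5 * np.cos(np.radians(b - a))**2

for a, b in [(0, 30), (10, 75), (45, 45)]:
    print(both_pass_alice_first(a, b), both_pass_bob_first(a, b))   # identical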
 
