Trying to Understand Bell's Reasoning

  • Thread starter billschnieder
In summary, Bell's argument starts with the idea that, according to quantum mechanics, if Alice measures +1 then Bob must measure -1. He then introduces hidden variables to obtain a more complete state. His ansatz, equation (2) in his paper, correctly represents these local-causal hidden variables and necessarily leads to Bell's inequalities. Experiments have effectively demonstrated that these inequalities are violated, leading to the conclusion that the real physical situation of the experiments is not locally causal. However, there is doubt surrounding statement (1), which represents local reality by writing the joint probability of the outcomes at A and B as the product of the individual probabilities at each station. This does not take into account the chain rule P(AB|H) = P(A|H)P(B|AH).
  • #71
DrChinese said:
Great example, pretty much demonstrates everything I have been saying all along:

a) There is a dataset which is realistic, i.e. you can create a dataset which presents data for properties not actually observed;
Huh?

b) The sample is NOT representative of the full universe, something which is not a problem with Bell since it assumes the full universe in its thinking; i.e. your example is irrelevant
Look at Bell's equation (15); he writes

1 + P(b,c) >= | P(a,b) - P(a,c)|

Do you see the cyclicity I mentioned in my previous post? Bell assumes that the b in P(b,c) is the same b in P(a,b), that the a in P(a,b) is the same a in P(a,c), and that the c in P(a,c) is the same c in P(b,c). The inequalities will fail if these assumptions do not hold. One way to avoid this would have been to start the equations from P(AB|H) = P(A|H)P(B|AH); the term P(B|AH) reminds us not to confuse P(B|AH) with P(B|CH).

Now fast forward to Aspect-type experiments: each pair of photons is analysed under different circumstances, so for each iteration you need to index the data according to at least such factors as the time of measurement, the local hidden variables of the measuring instruments, and the local hidden variables specific to each photon of the pair, NOT just the angles as Bell did. Adding just one of these parameters breaks the cyclicity. So it is very clear that Bell's inequalities as derived only work for data that has been indexed to preserve the cyclicity. Sure, this shows that the fair sampling assumption is not valid unless steps have been taken to ensure fair sampling. But it is impossible to do that, as it would require knowledge of all hidden variables at play in order to design the experiment. The failure of the fair sampling assumption is just a symptom of a more serious issue with Bell's ansatz.
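The cyclicity claim here is easy to check numerically. Below is a minimal sketch (my construction, not from the thread), assuming Bell's anticorrelation convention P(x,y) = -E[AxAy], under which equation (15) reads 1 - E_bc >= |E_ab - E_ac|. With all three correlations estimated from the same set of outcome triples the bound can never fail, while three independently produced numbers can "violate" it trivially:

```python
# Sketch: Bell's (15) with cyclicity preserved vs. broken.
# Convention assumed: P(x,y) = -E[AxAy] (Bell's anticorrelated singlet case),
# so (15) reads 1 - E_bc >= |E_ab - E_ac|.
import random

def cyclic_trial(num_runs=100_000):
    # Draw every run's triple (Aa, Ab, Ac) from one arbitrary joint distribution.
    patterns = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
    weights = [random.random() for _ in patterns]      # arbitrary distribution
    runs = random.choices(patterns, weights=weights, k=num_runs)
    E_ab = sum(a * b for a, b, c in runs) / num_runs
    E_ac = sum(a * c for a, b, c in runs) / num_runs
    E_bc = sum(b * c for a, b, c in runs) / num_runs
    return 1 - E_bc >= abs(E_ab - E_ac)

print(all(cyclic_trial() for _ in range(20)))   # always True

# Cyclicity broken: each correlation comes from a DISJOINT sub-experiment, so
# the three numbers are unconstrained and the bound can fail:
E_ab, E_ac, E_bc = 1.0, -1.0, 0.0               # each attainable on its own
print(1 - E_bc >= abs(E_ab - E_ac))             # False
```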

c) By way of comparison to Bell, it does not agree with the predictions of QM; i.e. obviously QM does not say anything about doctors and patients in your example.
I assume you know about the Bell-type Leggett-Garg inequality (LGI). The doctors-and-patients example violates the LGI, and so does QM. Remember that violation of the LGI is supposed to prove that realism is false even in the macro realm. Using the LGI, which is a Bell-type inequality, in this example is proper and relevant to illustrate the problem with Bell's inequalities.
 
  • #72
billschnieder said:
1. Huh?

2. Now fast forward to Aspect-type experiments: each pair of photons is analysed under different circumstances, so for each iteration you need to index the data according to at least such factors as the time of measurement, the local hidden variables of the measuring instruments, and the local hidden variables specific to each photon of the pair, NOT just the angles as Bell did. Adding just one of these parameters breaks the cyclicity. So it is very clear that Bell's inequalities as derived only work for data that has been indexed to preserve the cyclicity. Sure, this shows that the fair sampling assumption is not valid unless steps have been taken to ensure fair sampling. But it is impossible to do that, as it would require knowledge of all hidden variables at play in order to design the experiment. The failure of the fair sampling assumption is just a symptom of a more serious issue with Bell's ansatz.

1. My point is: in your example, you are presenting a specific dataset. It satisfies realism. I am asking you to present a dataset for angle settings 0, 120, 240 degrees. Once you present it, it will be clear that the expected QM relationship does not hold. Once you acknowledge that, you have accepted what Bell has told us. It isn't that hard.

2. I assume you are not aware that there have been, in fact, experiments (such as Rowe) in which no sampling is required (essentially 100% detection). The entire dataset is sampled. Of course, you do not need to "prove" that the experimenter has obtained an unbiased dataset of ALL POSSIBLE events in the universe (i.e. for all time) unless you are changing scientific standards. By such logic (if you are asserting that all experiments are subsets of a larger universe of events and are not representative), all experimental proof would be considered invalid.

Meanwhile, do you doubt that these results would be repeated on any given day? In your example, by contrast, the results hold only occasionally: if you don't pick the right cyclic combination of events, you won't get your result.

On the other hand, if you say your example proves a breakdown of Bell's logic: Again, you have missed the point of Bell entirely. Go back and re-read 1 above.
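DrChinese's point 1 above (the 0/120/240 request) can be made concrete with a brute-force check. A minimal sketch, assuming the photon-polarization version of the gedanken, in which perfect correlation at equal settings forces predetermined answers and the QM match probability at relative angle Delta is cos^2(Delta):

```python
# Any "realistic" dataset assigns each pair predetermined answers for the
# settings 0, 120, 240 degrees (perfect correlation at equal settings forces
# this). Enumerating all 8 answer triples shows the match rate when Alice and
# Bob choose DIFFERENT settings is never below 1/3, whereas QM predicts
# cos^2(120 deg) = 1/4 for polarization-entangled pairs.
from itertools import product

def match_rate(triple):
    # Average agreement over the 6 ordered pairs of different settings.
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    return sum(triple[i] == triple[j] for i, j in pairs) / len(pairs)

print(min(match_rate(t) for t in product([+1, -1], repeat=3)))  # 0.333...
print(0.25)  # the QM prediction, cos^2(120 degrees)
```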
 
  • #73
Very interesting posts!

The answer to why Bell chose P(AB|H) = P(A|H)P(B|H) may lie in the fact that Bell's gedanken experiment is completely random, that is, 50% of the time the lights flash the same color and 50% of the time a different color. Does the randomness screen off the dependence on H?

Below are my thoughts on the experiment.

In a completely random experiment (no correlations) there are 36 possible outcomes. To summarize:

1) Six are same color-same switch
2) Six are different color-same switch
3) Twelve are same color-different switch
4) Twelve are different color-different switch

From (2) above, six runs are different colors when the switches are the same (theta = 0). They are: 11RG, 11GR, 22RG, 22GR, 33RG, and 33GR. In order to match Bell's gedanken experiment these must be converted by the correlation process to the same color.

(6)(cos 0) = (6)(1) = 6, a 100% conversion

Adding these to the six runs from (1) that are same color-same switch gives twelve in total, i.e. 12/36 = 1/3 of the runs have the same switch setting and the same color.

To conserve the random behavior of the gedanken experiment another, opposite correlation must occur, in which exactly six runs of same color but different switch settings are converted to a different color. There are twelve such runs: 12RR, 12GG, 21RR, 21GG, 13RR, 13GG, 31RR, 31GG, 23RR, 23GG, 32RR, and 32GG. Therefore, on average, six of these must be converted to a different color. This leaves 6/24 runs that have the same color but different switches.

(12)|cos 120| = (12)(0.5) = 6, a 50% conversion

This produces the randomness of the experiment: over all runs the lights flash the same color 50% of the time, yet when the switches have the same setting the lights flash the same color 100% of the time. That is:

(12/36)(12/12) + (24/36)(6/24) = 1/3 + 1/6 = 1/2
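A quick enumeration check of this bookkeeping (assuming each of the 36 switch/color combinations is equally likely in the uncorrelated case):

```python
# 3x3 switch settings times 2x2 colors = 36 equally likely raw outcomes.
from itertools import product

outcomes = list(product([1, 2, 3], [1, 2, 3], "RG", "RG"))
assert len(outcomes) == 36
print(sum(s1 == s2 and c1 == c2 for s1, s2, c1, c2 in outcomes))  # 6  same switch, same color
print(sum(s1 == s2 and c1 != c2 for s1, s2, c1, c2 in outcomes))  # 6  same switch, diff color
print(sum(s1 != s2 and c1 == c2 for s1, s2, c1, c2 in outcomes))  # 12 diff switch, same color
print(sum(s1 != s2 and c1 != c2 for s1, s2, c1, c2 in outcomes))  # 12 diff switch, diff color
print(12/36 * 12/12 + 24/36 * 6/24)                               # 0.5, as claimed
```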

Do the opposite correlations described above cancel the dependence on H and explain the choice of the equation

P(AB|H) = P(A|H)P(B|H)?
 
  • #74
DrChinese said:
1. My point is: in your example, you are presenting a specific dataset. It satisfies realism. I am asking you to present a dataset for angle settings 0, 120, 240 degrees. Once you present it, it will be clear that the expected QM relationship does not hold. Once you acknowledge that, you have accepted what Bell has told us. It isn't that hard.

2. I assume you are not aware that there have been, in fact, experiments (such as Rowe) in which no sampling is required (essentially 100% detection). The entire dataset is sampled. Of course, you do not need to "prove" that the experimenter has obtained an unbiased dataset of ALL POSSIBLE events in the universe (i.e. for all time) unless you are changing scientific standards. By such logic (if you are asserting that all experiments are subsets of a larger universe of events and are not representative), all experimental proof would be considered invalid.

Meanwhile, do you doubt that these results would be repeated on any given day? In your example, by contrast, the results hold only occasionally: if you don't pick the right cyclic combination of events, you won't get your result.

On the other hand, if you say your example proves a breakdown of Bell's logic: Again, you have missed the point of Bell entirely. Go back and re-read 1 above.

I don't think you have understood my critique. It's hard to figure out what you are talking about because it is not at all relevant to what I have been discussing. I have explained why it is IMPOSSIBLE to perform an experiment comparable to Bell's inequalities. The critique is valid even if no experiment is ever performed. So I don't see the point of bringing up Rowe, because Rowe cannot do the impossible. Rowe does not know the nature and behaviour of ALL hidden variables at play, so it is IMPOSSIBLE for him to have preserved the cyclicity. Detection efficiency is irrelevant to this discussion. I already mentioned in post #1 that we can assume there is no loophole in the experiment. The issue discussed here is not a problem with experiments but with the formulation used in deriving the inequalities.

You ask me to provide a dataset for three angles. It doesn't take much to convert the doctors example into one with photons. Make a,b,c equivalent to the angles, 1,2,3 equivalent to the stations and n equivalent to the iteration of the experiment. Since in Aspect type experiments, only two photons are ever analysed per iteration, the situation is similar to the one in which only two doctors are involved. You get exactly the same results. I don't know what other dataset you are asking for.
 
  • #75
rlduncan said:
Very interesting posts!

The answer to why Bell chose P(AB|H) = P(A|H)P(B|H) may lie in the fact that Bell's gedanken experiment is completely random, that is, 50% of the time the lights flash the same color and 50% of the time a different color. Does the randomness screen off the dependence on H?
You bring up an interesting point. IFF the hidden variables are completely randomly distributed in space-time, it may be possible to rescue Bell's formulation. But do you think that is a reasonable assumption for quantum particles? Assume for a moment that there is a space-time harmonic hidden variable for photons, with an unknown phase and frequency. Can you devise an algorithm to sample the hidden variable such that the resulting data is random, without any knowledge that the signal is harmonic, or of its phase or frequency?
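To make this concrete, here is a toy illustration (my construction, with an assumed sampling scheme): sampling the sign of an unknown harmonic signal at an experimenter's regular measurement times produces data that is far from random.

```python
# A hidden variable lam(t) = sign(sin(2*pi*f*t + phi)), with f and phi unknown
# to the experimenter, sampled at the experiment's fixed interval dt.
import math, random

f = random.uniform(0.9, 1.1)            # unknown frequency
phi = random.uniform(0, 2 * math.pi)    # unknown phase
dt = 0.05                               # experimenter's fixed sampling interval

lam = [1 if math.sin(2 * math.pi * f * i * dt + phi) >= 0 else -1
       for i in range(100_000)]

mean = sum(lam) / len(lam)
lag1 = sum(x * y for x, y in zip(lam, lam[1:])) / (len(lam) - 1)
print(f"mean = {mean:+.3f}, lag-1 autocorrelation = {lag1:+.3f}")
# The mean is near 0 (the singles look "random"), but the lag-1 autocorrelation
# is near +1: successive samples are strongly dependent, not random.
```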
 
  • #76
It would be easy to simulate Bell's experiment by rolling dice whose faces alternate green and red, with specific instructions given when observing the uppermost face. These instructions can be derived from my first post.
 
  • #77
billschnieder said:
I don't think you have understood my critique. It's hard to figure out what you are talking about because it is not at all relevant to what I have been discussing. I have explained why it is IMPOSSIBLE to perform an experiment comparable to Bell's inequalities. The critique is valid even if no experiment is ever performed. So I don't see the point of bringing up Rowe, because Rowe cannot do the impossible. Rowe does not know the nature and behaviour of ALL hidden variables at play, so it is IMPOSSIBLE for him to have preserved the cyclicity. Detection efficiency is irrelevant to this discussion. I already mentioned in post #1 that we can assume there is no loophole in the experiment. The issue discussed here is not a problem with experiments but with the formulation used in deriving the inequalities.

You ask me to provide a dataset for three angles. It doesn't take much to convert the doctors example into one with photons. Make a,b,c equivalent to the angles, 1,2,3 equivalent to the stations and n equivalent to the iteration of the experiment. Since in Aspect type experiments, only two photons are ever analysed per iteration, the situation is similar to the one in which only two doctors are involved. You get exactly the same results. I don't know what other dataset you are asking for.

Your critique is hardly new, and it has been thoroughly rejected. You do everything possible to avoid dealing directly with the issues that make Bell important. Still. Yes, I agree that every experiment ever performed by any experimenter anywhere may fall victim to the idea of the "periodic cycle" subset problem. This of course has absolutely nothing to do with Bell. Perhaps the speed of light is actually 2c after all! Wow, you have your finger on something. So I say you are wasting everyone's time if your assertion is that tests like Rowe are invalid because they too are subsets and fall victim to a hidden bias. That would not be fair science. You are basically saying: evidence is NOT evidence.

The problem with this logic is that it STILL means that QM is incompatible with local realism. As has been pointed out by scientists everywhere, maybe QM is wrong (however unlikely). That STILL does not change Bell. Do you follow any of this? Because you seem like a bright, fairly well-read person.

If you want to debate Bell tests, which would be fine, you need to start by acknowledging what Bell's theorem itself says. Then work from there. Clearly, no local realistic theory will be compatible with QM, and QM is well supported. There are many, such as Hess (I believe you referenced him earlier), who attack Bell tests. They occasionally attack Bell too. But the only place they have ever gained any real traction is by attacking the Fair Sampling Assumption. However, that line of attack acknowledges the validity of Bell. It is a completely different argument from the one you assert. Specifically, if the Fair Sampling Assumption is invalid, then QM is in fact WRONG. Bell, however, is still RIGHT.

ON THE OTHER HAND: if you want to debate whether the Fair Sampling Assumption can be modeled into a Bell test, I would happily debate that point. As it happens, I have spent a significant amount of time tearing into the De Raedt model (if you know it). After an extended analysis, I think I have learned the secret to disassembling anything you would care to throw at me. But a couple of points: I will discuss something along the lines of a photon test using PDC, but will not discuss doctors in Africa. Let's discuss the issues that make Bell relevant, and those are not hypothetical tests. There are real datasets to discuss. And there are a few more twists to modeling a local realistic theory these days - since we know from Bell that such a theory's predictions must differ from those of QM.
 
  • #78
billschnieder said:
The point is that certain assumptions are made about the data when deriving the inequalities, and these must hold in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a, b, c). Assume further that doctors in three cities, Lyon, Paris, and Lille (denoted 1, 2, 3), are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three, where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n), where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on the presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that, combining any possible diagnosis results, the Leggett-Garg inequality will not be violated, as the result of the above expression will always be >= -1 so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy the inequality, and we can drop the indices and write it based only on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b), and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as often. The doctors, not knowing, or having any reason to suspect, that the date or location of examination has any influence, decide to designate their patients only by place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed around +1/-1, and they are completely baffled. How can the single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be a superluminal influence between the two cities.

But there are more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will result in what they observed:

- on even dates Aa = +1 and Ac = -1 in both cities while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> = -3
which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But dropping the indices for the cities gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly to provide a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to index the data to preserve the cyclicity, it is unreasonable to expect their data to match the inequalities.

For a fuller treatment of this example, see Hess et al., "Possible experience: From Boole to Bell", EPL 87, 60007 (2009).
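The two-doctor scenario quoted above is easy to simulate. A minimal sketch (my code, implementing the even/odd-date rule exactly as stated):

```python
# 1) Cyclicity preserved: for ANY triple (Aa, Ab, Ac) in {-1,+1},
#    Aa*Ab + Aa*Ac + Ab*Ac >= -1, so averages of such triples obey the LGI.
from itertools import product
assert all(a*b + a*c + b*c >= -1 for a, b, c in product([-1, 1], repeat=3))

# 2) Two cities only (cyclicity broken), per the rule above:
#    even dates: Aa = +1 and Ac = -1 in both cities, Ab = +1 in Lille (city 1)
#    and Ab = -1 in Lyon (city 2); odd dates: all signs reversed.
N = 1000
p1 = p2 = p3 = 0.0
singles = []
for n in range(N):
    sign = +1 if n % 2 == 0 else -1
    A1a, A1b = sign, sign          # Lille examines types a and b
    A2b, A2c = -sign, -sign        # Lyon examines types b and c
    p1 += A1a * A2b
    p2 += A1a * A2c
    p3 += A1b * A2c
    singles.append(A1a)

print(p1/N + p2/N + p3/N)          # -3.0: the LGI bound of -1 is blown through
print(sum(singles) / N)            # 0.0: the single outcomes look random
```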
I'm not familiar with the Leggett-Garg inequality, and wikipedia's explanation is not very clear. But I would imagine any derivation of the inequality assumes certain conditions hold, like the experimenters being equally likely to choose any detection setting on each trial perhaps, and that a violation of these conditions is responsible for the violation of the inequality in your example above...is that incorrect?
billschnieder said:
Note that in deriving Bell's inequalities, Bell used Aa(l), Ab(l) Ac(l), where the hidden variables (l) are the same for all three angles.
If l represents something like the value of all local physical variables in the past light cone of the region where the measurement (and the decision of what angle to set the detector) was made, then the measurement angle cannot have a retroactive effect on the value of l, although it is possible that the value of l will itself affect the experimenter's choice of detector angle. Is it the latter possibility you're worried about? The proof of Bell's theorem does usually include a "no-conspiracy" assumption where it's assumed that the probability the particles will have different possible predetermined spins on each detector angle is independent of the probability that the experimenter will choose different possible detector angles.
billschnieder said:
For this to correspond to the Aspect-type experimental situation, the hidden variables must be exactly the same for all the angles, which is an unreasonable assumption because each particle could have its own hidden variables, the measurement equipment could each have their own hidden variables, and the time of measurement after emission is itself a hidden variable.
In the case of certain inequalities like the one that says the probability of identical results when different angles are chosen must be greater than or equal to 1/3, it's assumed that there's a perfect correlation between the results whenever the experimenters choose the same angle; you can prove that the only way this is possible in a local realist universe is if the hidden variables already completely predetermine what results will be found for each detector setting, so if the hidden variables are restricted to the past light cones of the measurement regions then any additional hidden variables in the measurement regions can't affect the outcome. I discussed this in post 61/62 of the other thread, along with the "no conspiracy" assumption. Other inequalities like the CHSH inequality and the one you mentioned don't require an assumption of perfect correlations, in these cases I'd have to think more about whether hidden variables associated with the measurement apparatus might affect the outcome, but Bell's original paper did derive an inequality based on the assumption of perfect correlations for identical measurement settings.

But here we're going somewhat astray from the original question of whether P(AB|H)=P(A|H)P(B|H) is justified. Are you ever going to address my arguments about past light cones in post #41 using your own words, rather than just trying to discount it with quotes from the Stanford Encyclopedia article which turned out to be irrelevant?
 
  • #79
DrChinese said:
Your critique is hardly new, and it has been thoroughly rejected. You do everything possible to avoid dealing directly with the issues that make Bell important.
On the contrary, I have. I do not see that you have responded to any of the issues I have raised:

1) I have demonstrated, I believe very convincingly, that it is possible to violate Bell-type inequalities by simply collecting data in such a way that the cyclicity is not preserved.
2) I have shown this using an example which is macroscopic, so that there is no doubt in any reasonable person's mind that it is local and real. Yet by not knowing the exact nature of the hidden elements of reality in play, it is very easy to violate Bell-type inequalities.
3) Furthermore, I have given specific reasons why the inequality was violated, by providing a locally causal explanation for how the hidden elements generate the data. It is therefore very clear that violation of the inequalities in the example I provided is NOT due to spooky action at a distance.
4) For this macroscopic example, I have used the Bell-type inequality normally used in macroscopic situations (the Leggett-Garg inequality), which is violated by QM and was supposed to prove that the time evolution of a system cannot be understood classically. My example, which is locally causal, real and classical, also violates the inequality; that should be a big hint -- QM and local reality agree with each other here. Remember that QM and Aspect-type experiments also agree with each other.
5) The evidence is very clear to me. On the one hand we have QM and experiments, which agree with each other. On the other hand we have Bell-type inequalities, violated by everything. There is only one odd man out in the mix, and it is neither QM nor the experiments. Evidence is evidence.
6) Seeing that Bell-type inequalities are the odd man out, my interest in this thread was to discuss (a) how Bell's ansatz represents the situation he is trying to model, and (b) how the inequalities derived from the ansatz are comparable to actual experiments performed. The argument mentioned in my first post can therefore be expanded as follows:

i) Bell's ansatz (equation 2 in his paper) correctly represents all local-causal theories
ii) Bell's ansatz necessarily leads to Bell's inequalities
iii) Aspect-type Experiments are comparable to Bell's inequalities
iv) Aspect-type Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of Aspect-type experiments is not locally causal.

In order for the conclusion to be valid, all the premises (i to iv) must be valid. Failure of any one of them is sufficient to kill the argument. We have discussed the validity of (i); JesseM says it is justified, I say it is not, for reasons I have explained, and we can agree to disagree there. However, even if JesseM is right, and I don't believe he is, (iii) is not justified, as I have shown. Therefore the conclusion is not valid.

I have already admitted that (ii) and (iv) have been proven. So bringing up Rowe doesn't say anything I have not already admitted. You only need to look at equation (2) in Rowe's paper to see that the same issue of cyclicity and incomplete indexing applies. Do you understand the difference between incomplete indexing and incomplete detection? You could collect 100% of the data with a detection efficiency of 100% and still violate the inequality if the data is not indexed to maintain cyclicity for all hidden elements of reality in play.

You may disagree with everything I have said, but any reasonable person will agree that failure of any one of those premises (i to iv) invalidates the argument. It is therefore proper to discuss the validity of each one.

Throughout this thread, I and many others have used examples with cards, balls, fruits etc. to explain a point, because they are easier to visualize. The doctors-and-patients example is no different.
 
  • #80
billschnieder said:
I have already admitted that (ii) and (iv) have been proven. So bringing up Rowe doesn't say anything I have not already admitted. You only need to look at equation (2) in Rowe's paper to see that the same issue of cyclicity and incomplete indexing applies. Do you understand the difference between incomplete indexing and incomplete detection? You could collect 100% of the data with a detection efficiency of 100% and still violate the inequality if the data is not indexed to maintain cyclicity for all hidden elements of reality in play.

And I have already said that all science falls to the same argument you present here. You may as well be claiming that General Relativity is wrong and Newtonian gravity is correct, and that there is a cyclic component that makes it "appear" as if GR is correct. Do you not see the absurdity?

When you discover the hidden cycle, you can collect the prizes due. Meanwhile, you may want to consider WHY PDC photon pairs with KNOWN polarization don't act as you predict they should. That should be a strong hint that you are factually wrong even in this absurd claim.
 
  • #81
billschnieder:

In other words: saying that you can find a "cyclic" solution to prove Bell wrong is easy. I am challenging you to come up with an ACTUAL candidate to back up your claim. Then you will see that Bell is no laughing matter. This is MY claim, that I can take down any example you provide. Remember, no doctors in Africa; we are talking about entangled (and perhaps unentangled) PDC photon pairs.

You must be able to provide a) perfect correlations (Bell mentions this). You must be able to provide b) detection rates that are rotationally invariant (to match the predictions of QM). The results must c) form a random pattern with p(H) = p(V) = 50%. And of course, you must provide d) reasonable agreement with the cos^2(theta) rule for the subset, with e) respect for Bell's Inequality in the full universe.

Simply show me the local realistic dataset/formulae.
======================================

I already did this exercise with a model that has had years of work put into it, the De Raedt model. It failed, but perhaps you will fare better. Good luck! But please note, unsubstantiated claims (especially those going against established science) are not well received around here. You have placed one out here, and you should either retract it or defend it.
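For anyone wanting to take up this challenge, here is a rough test harness (entirely my own scaffolding, with an assumed model signature) for checking a candidate local realistic model against requirements (a), (b) and (d) above:

```python
# A candidate model is any function model(a, b, lam) -> (A, B) in {+1,-1},
# where lam is the shared hidden variable and neither side's outcome may
# depend on the other side's angle.
import math, random

def test(model, n=200_000):
    def E(a, b):
        total = 0
        for _ in range(n):
            lam = random.uniform(0, 2 * math.pi)   # assumed hidden-variable space
            A, B = model(a, b, lam)
            total += A * B
        return total / n
    d = math.pi / 3                                       # 60 degrees
    print("(a) E at equal settings:", E(0.3, 0.3))        # want +1.0
    print("(b) rotational invariance:", E(0.0, d), E(0.7, 0.7 + d))  # want equal
    print("(d) model E at 60 deg:", E(0.0, d))            # naive model: about -0.33
    print("    QM    E at 60 deg:", math.cos(2 * d))      # cos(2*delta) = -0.5

# Example candidate (fails requirement (d), as Bell says any such model must):
# each side answers by comparing the shared lam with its own polarizer angle.
def naive(a, b, lam):
    A = +1 if math.cos(2 * (lam - a)) >= 0 else -1
    B = +1 if math.cos(2 * (lam - b)) >= 0 else -1
    return A, B

test(naive)
```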
 
  • #82
JesseM said:
I'm not familiar with the Leggett-Garg inequality, and wikipedia's explanation is not very clear. But I would imagine any derivation of the inequality assumes certain conditions hold, like the experimenters being equally likely to choose any detection setting on each trial perhaps, and that a violation of these conditions is responsible for the violation of the inequality in your example above...is that incorrect?

What we have been discussing is the assumption that whatever is causing the correlations is randomly distributed in the data-taking process; in other words, that it has been screened off. The short of it is that your justification for using the PCC as a definition of local causality requires that it always be possible to screen off the correlation. But experimentally, it is impossible to screen off a correlation if you have no clue what is causing it.

It doesn't look like we are going to agree on any of this. So we can agree to disagree.

If l represents something like the value of all local physical variables in the past light cone of the region where the measurement (and the decision of what angle to set the detector) was made, then the measurement angle cannot have a retroactive effect on the value of l, although it is possible that the value of l will itself affect the experimenter's choice of detector angle. Is it the latter possibility you're worried about?
Defining l vaguely as all physical variables in the past light cone of the region of measurement fails to consider the fact that not all subsets of l may be at play in the case of A, or B. Some subsets of l may actually be working against the result while others are working for it. Those supporting the result at A may be working against the result at B, and vice versa. That is why it is mentioned in the Stanford Encyclopedia that it is not always possible to define l such that it screens off the correlation. It is even harder if you have no idea what is actually happening.

Another way of looking at it is as follows. If your intention is to say l is so broad as to represent all pre-existing facts in the local universe, then there is no reason to even include it, because the marginal probability says the same thing. What, in your opinion, will then be the effective difference between P(A) and P(A|l)? There will be none, and your equation reduces to P(AB) = P(A)P(B), which contradicts the fact that there is a marginal correlation between A and B.

Remember that P(A) = P(A,H) + P(A,notH)

If H is defined so broadly that P(H) = 1, then P(notH) = 0, and P(A) = P(AH).

As I already explained, in probability theory, lack of causal relationship is not a sufficient justification to assume lack of logical relationship. In the Bell situation, we are not just interested in an angle but a joint result between two angles. It is not possible to determine that a coincidence has happened without taking the other result into account, so including a P(B|AH) term is not because we think A is causing B but because we will be handling the data in a joint manner. I don't expect us to agree on this. So we can agree to disagree.

Other inequalities like the CHSH inequality and the one you mentioned don't require an assumption of perfect correlations, in these cases I'd have to think more about whether hidden variables associated with the measurement apparatus might affect the outcome, but Bell's original paper did derive an inequality based on the assumption of perfect correlations for identical measurement settings.
The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time together with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, over which he has no control, is in the same state.

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2)+q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. The same argument as above applies.

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

I have already explained this one.
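Taking the CHSH case above as an example, the "cyclic" bound is easy to verify by brute force (a minimal check, using billschnieder's notation):

```python
# If one set of outcomes d1, y2, a1, b2 (each +/-1) enters all four terms,
# the CHSH expression can never exceed 2.
from itertools import product

def chsh(d1, y2, a1, b2):
    return abs(d1*y2 - a1*y2) + abs(d1*b2 + a1*b2)

print(max(chsh(*q) for q in product([-1, 1], repeat=4)))  # 2
```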

But here we're going somewhat astray from the original question of whether P(AB|H)=P(A|H)P(B|H) is justified.
Not at all; see my last post for an explanation of why. Your arguments boil down to the assertion that the PCC is universally valid for locally causal hidden variable theories. I disagree with that; you disagree with me. I have presented my arguments, you have presented yours, so be it. We can agree to disagree about that; there is no need to keep going "no it doesn't ... yes it does ... no it doesn't ... etc."
 
  • #83
billschnieder said:
...The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time together with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same.

Oops... Are you sure about that?

If b is not the same b... How does it work out for the (b, b) case?

:eek:

You see, b and b can be tested at all different times too! According to your model, you won't get Alice's b and Bob's b to be the same. Bell mentions this requirement. Better luck next time.
 
  • #84
JesseM said:
P(AB|H)=P(A|H)P(B|AH) is a general statistical identity that should hold regardless of the meanings of A, B, and H, agreed? So to get from that to P(AB|H)=P(A|H)P(B|H), you just need to prove that in this physical scenario, P(B|AH)=P(B|H), agreed? If you agree, then just let H represent an exhaustive description of all the local variables (hidden and others) at every point in spacetime which lies in the past light cone of the region where measurement B occurred. If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission, as I argued in post #41.
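For reference, JesseM's two steps written out (these are standard probability identities; the annotations are mine):

```latex
\begin{align*}
P(A,B \mid H) &= P(A \mid H)\,P(B \mid A,H) && \text{(chain rule; always an identity)}\\
              &= P(A \mid H)\,P(B \mid H)   && \text{(valid iff } P(B \mid A,H) = P(B \mid H)\text{, i.e. iff } H \text{ screens } A \text{ off from } B\text{)}
\end{align*}
```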

Based on H, which includes all values of |a-b|, the angular difference between the polarizer settings, and all values of |La - Lb|, the emission-produced angular difference between the optical vectors of the disturbances incident on the polarizers set at a and b respectively, then when, e.g., |a-b| = 0 and |La - Lb| = 0, P(B|AH) /= P(B|H).

In this case, we can, with certainty, say that if A = 1, then B = 1, and if A = 0, then B = 0. So, our knowledge of the result at A can alter our estimate of the probability of B without implying FTL information transmission.

P(B|AH) /= P(B|H) also holds for |a-b| = 90 degrees (with |La - Lb| = 0) without implying FTL information transmission.

The confluence of the consideration in the OP and the observation that the individual and joint measurement contexts involve different variables doesn't seem to allow the conclusion that violation of BIs implies ftl info transmission; it allows only that there's a disparity between Bell's ansatz and the experimental situations to which it's applied, based on an inappropriate modelling requirement.

For an accurate account of the joint detection rate, P(AB) must be expressed in terms of the joint variables which determine it. Assuming that |La - Lb| = 0 for all entangled pairs, then the effective independent variable in a local account of the joint measurement context becomes |a-b|. Hence the nonseparability of the qm treatment of an experimental situation where crossed polarizers are jointly analyzing a single-valued optical vector.
 
  • #85
ThomasT said:
Based on H, which includes all values of |a-b|, the angular difference between the polarizer settings, and all values of |La - Lb|, the emission-produced angular difference between the optical vectors of the disturbances incident on the polarizers set at a and b respectively, then when, e.g., |a-b| = 0 and |La - Lb| = 0, P(B|AH) /= P(B|H).

In this case, we can, with certainty, say that if A = 1, then B = 1, and if A = 0, then B = 0. So, our knowledge of the result at A can alter our estimate of the probability of B without implying FTL information transmission.

P(B|AH) /= P(B|H) also holds for |a-b| = 90 degrees (with |La - Lb| = 0) without implying FTL information transmission.

The confluence of the consideration in the OP and the observation that the individual and joint measurement contexts involve different variables doesn't seem to allow the conclusion that violation of BIs implies ftl info transmission; it allows only that there's a disparity between Bell's ansatz and the experimental situations to which it's applied, based on an inappropriate modelling requirement.

For an accurate account of the joint detection rate, P(AB) must be expressed in terms of the joint variables which determine it. Assuming that |La - Lb| = 0 for all entangled pairs, then the effective independent variable in a local account of the joint measurement context becomes |a-b|. Hence the nonseparability of the qm treatment of an experimental situation where crossed polarizers are jointly analyzing a single-valued optical vector.

ThomasT: Here is an important issue with your assumptions. Suppose I take a group of photon pairs that have the joint detection probabilities, common causes, and other relationships you describe above. This group, I will call NPE. Since they satisfy your assumptions, without any argument from me, they should produce Entangled State stats (cos^2(A-B) ). However, when we run an experiment on them, they actually produce Product State stats!

On the other hand, we take a group of photon pairs closely resembling your assumptions, but which I say do NOT fit exactly. We will call this group PE. These DO produce Entangled State stats.

NPE=Non-Polarization Entangled
PE=Polarization Entangled

Why doesn't the NPE group produce Entangled State stats? This is a serious deficiency in every local hidden variable account I have reviewed to date. If I produce a group that satisfies your assumptions without question, then that group should produce according to your predictions without question. That just doesn't happen. I hope this will spur you to re-think your approach.
 
  • #86
DrChinese said:
ThomasT: Here is an important issue with your assumptions. Suppose I take a group of photon pairs that have the joint detection probabilities, common causes, and other relationships you describe above. This group, I will call NPE. Since they satisfy your assumptions, without any argument from me, they should produce Entangled State stats (cos^2(A-B) ). However, when we run an experiment on them, they actually produce Product State stats!

On the other hand, we take a group of photon pairs closely resembling your assumptions, but which I say do NOT fit exactly. We will call this group PE. These DO produce Entangled State stats.

NPE=Non-Polarization Entangled
PE=Polarization Entangled

Why doesn't the NPE group produce Entangled State stats? This is a serious deficiency in every local hidden variable account I have reviewed to date. If I produce a group that satisfies your assumptions without question, then that group should produce according to your predictions without question. That just doesn't happen. I hope this will spur you to re-think your approach.
The assumptions are that the relationship between the disturbances incident on the polarizers is created during the emission process, and that |La - Lb| is effectively 0 for all entangled pairs. The only way these assumptions can be satisfied is by actually creating the relationship between the disturbances during the emission process. And these assumptions are compatible with PE stats.

The NPE group doesn't satisfy the above assumptions.
 
  • #87
ThomasT said:
The assumptions are that the relationship between the disturbances incident on the polarizers is created during the emission process, and that |La - Lb| is effectively 0 for all entangled pairs. The only way these assumptions can be satisfied is by actually creating the relationship between the disturbances during the emission process. And these assumptions are compatible with PE stats.

The NPE group doesn't satisfy the above assumptions.

And why not? You have them created simultaneously from the same process. The polarization value you mention is exactly 0 for ALL pairs. And they are entangled, just not polarization entangled. Care to explain your position?
 
  • #88
DrChinese said:
And why not? You have them created simultaneously from the same process. The polarization value you mention is exactly 0 for ALL pairs. And they are entangled, just not polarization entangled. Care to explain your position?
If they're not polarization entangled, then |La - Lb| > 0.
 
  • #89
ThomasT said:
If they're not polarization entangled, then |La - Lb| > 0.

Oops, that is not correct at all. For Type I PDC, they are |HH>. For Type II, they are |HV>.

That means, when you observe one to be H, you know with certainty what the polarization of the other is. Couldn't match your requirement any better. Spin conserved and known to have a definite value. This is simply a special case of your assumption. According to your hypothesis, these should produce Entangled State statistics. But they don't.

(Now of course, you deny that there is such a thing as Entanglement in the sense that it can be a state which survives the creation of the photon pair. But I don't.)
 
  • #90
Looking at the overall big picture here, and this may indeed be a stretch: with the possibility that future events affect past events, and the entanglement observed in photons, I wonder about the neurological association. The synaptic cleft is 20 nm, right? So we're dealing with quantum effects in memory.

From the perspective of how we associate time internally, it's based on memory recall of the past; if your memory recall or recording ability is altered, your sense of time is too. Is it possible that the entanglement goes further than the experiment tests? So that if a future measurement changes the past, then it also changes your memory due to the overall entanglement? How would one even know it occurred?

Looking at the photon in each time frame, the energy should be the same or it violates the conservation of energy. Then it's everywhere in each time frame, and if it's everywhere then there is no such thing as discrete time for a photon. Am I twisting things up too much here?
 
  • #91
DrChinese said:
Oops, that is not correct at all. For Type I PDC, they are |HH>. For Type II, they are |HV>.

That means, when you observe one to be H, you know with certainty what the polarization of the other is.
Not exactly. Since we don't (can't) know, and therefore can't say anything about, what the values of La and Lb are, we can only say that, for |a-b| = 0 and 90 degrees, if the polarizer at one end has transmitted a disturbance that resulted in a detection, then we can deduce what the result at the other end will be.

Since they don't produce entangled state stats, presumably there's a range of |La - Lb| > 0 that allows the contingent deductions for |a-b| = 0 and 90 degrees, but not the full range of entangled state stats.

Anyway, La and Lb don't even have to represent optical vectors. |La - Lb| can be taken to denote the relationship between any relevant local hidden variable subset(s) of H. Or we can just leave it out. I'm not pushing an lhv description. I think that's impossible. This thread is discussing why that's impossible.

The point is that P(B|AH) /= P(B|H) holds for certain polarizer settings without implying ftl info transmission.

Since this violation of P(AB|H) = P(A|H)P(B|H) doesn't imply ftl info transmission, then P(AB|H) = P(A|H)P(B|H) isn't a locality condition, but rather, strictly speaking, it's a local hidden variable condition.

Per the OP, since P(AB|H) = P(A|H)P(B|H) doesn't hold for all settings, then it can't possibly model the situation that it's being applied to.

Per me, since P(AB|H) = P(A|H)P(B|H) requires that joint detection rate be expressed in terms of individual variable properties which don't determine it, then it can't possibly model the situation that it's being applied to.

The point of Bell's analysis was that lhv theories are ruled out because they would have to be in the separable form that he specified, and, as he noted, "the statistical predictions of quantum mechanics are incompatible with separable predetermination".
 
  • #92
DrChinese said:
(Now of course, you deny that there is such a thing as Entanglement in the sense that it can be a state which survives the creation of the photon pair. But I don't.)
I don't know what you mean here. Could you elaborate please?
 
  • #93
ThomasT said:
I don't know what you mean here. Could you elaborate please?

If one pushes local realism, one is asserting there is no ongoing connection between Alice and Bob. QM denies this. The connection is that Alice = Bob (at same settings) for any setting.
 
  • #94
ThomasT said:
Not exactly. Since we don't (can't) know, and therefore can't say anything about, what the values of La and Lb are, we can only say that, for |a-b| = 0 and 90 degrees, if the polarizer at one end has transmitted a disturbance that resulted in a detection, then we can deduce what the result at the other end will be.

...

But you say that photon pairs with a joint common cause (or however you term it) and a definite polarization should produce Entangled State stats. They don't. Your assumption cannot be correct. Only ENTANGLED photons - pairs in a superposition - have the characteristic that they produce Entangled State statistics.

According to your revised explanation above, photons in the special case where we have |HH> at 0 degrees should give |HH> or |VV> whenever A-B = 0. But they don't, as I mentioned. Instead they give Product State stats. Hey, if the special case fails, how does your general case hold?
 
  • #95
DrChinese said:
But you say that photon pairs with a joint common cause (or however you term it) and a definite polarization should produce Entangled State stats. They don't. Your assumption cannot be correct. Only ENTANGLED photons - pairs in a superposition - have the characteristic that they produce Entangled State statistics.
Entangled state stats are compatible with the assumption that the photons have a local common cause, say, via the emission process (you can interpret the emission models this way). It's just that you can't denote the entangled state in terms of the individual properties of the separated photons -- because that's not what's being measured in the joint context.

DrChinese said:
According to your revised explanation above, photons in the special case where we have |HH> at 0 degrees should give |HH> or |VV> whenever A-B = 0. But they don't, as I mentioned.
You said that there are cases where pdc photons exhibit the |a-b| = 0 and 90 degrees perfect correlations, but not the polarization entanglement stats. And I said ok, but that doesn't diminish the fact that assuming a local common cause for photons that do produce polarization entanglement stats is compatible with the perfect correlations and hence P(B|H) /= P(B|AH) holds for the detection contingencies at those angles without implying ftl info transmission.
 
  • #96
DrChinese said:
If one pushes local realism, one is asserting there is no ongoing connection between Alice and Bob. QM denies this. The connection is that Alice = Bob (at same settings) for any setting.
I don't follow. Are you saying that qm says there's a nonlocal 'connection' between the observers? I don't think you have to interpret it that way.
 
  • #97
ThomasT said:
I don't follow. Are you saying that qm says there's a nonlocal 'connection' between the observers? I don't think you have to interpret it that way.

Sure it does. There is a superposition of states. Observation causes collapse (whatever that is) based on the observation. According to EPR, that makes Bob's reality dependent on Alice's decision. Now, both EPR and Bell realized there were 2 possibilities: either QM is complete (no realism possible) or there is spooky action at a distance (locality not respected). But either way, the superposition means there is something different going on than a classical mixed state.

A local realist denies this, saying that there is no superluminal influence and that QM is incomplete because a greater specification of the system is possible. But Bell shows that QM, if incomplete, is simply wrong. That's a big pill to swallow, given 10,000 experiments (or whatever) that say it isn't.
 
  • #98
DrChinese said:
Observation causes collapse (whatever that is) based on the observation. According to EPR, that makes Bob's reality dependent on Alice's decision.
Or, we can assume that the correlated events at A and B have a local common cause. And, standard qm is not incompatible with that assumption.

DrChinese said:
Now, both EPR and Bell realized there were 2 possibilities: either QM is complete (no realism possible) or there is spooky action at a distance (locality not respected).
EPR said that either qm is INcomplete (local realism possible) or there is spooky action at a distance (local realism impossible -- a detection at one end is instantaneously determining the reality at the other end -- in which case locality would be out the window). Qm is obviously incomplete as a physical description of the underlying reality. All you have to do is look at the individual results (which, by the way, qm isn't incompatible with an lhv account of) to ascertain that. (But that doesn't entail that a viable lhv account of entanglement is possible.) The reason that the qm treatment is, in a certain sense, a 'complete' account of the joint entangled situation is that the information necessary to predict individual detection SEQUENCES isn't necessary to predict joint detection RATES. But, again obviously, qm isn't, in the fullest sense, a complete account of the joint entangled context either, because it can't predict the order, the sequences, of the coincidental results. It can only predict the coincidence RATE, and for that all that's needed is |a - b| and the assumption that whatever |a - b| is analyzing is the same at both ends for any given coincidence window -- and that relationship, that sameness, is compatible with the assumption of a local common cause (even if qm doesn't explicitly say that; as I've mentioned, the emission model(s) can be interpreted that way).

DrChinese said:
But either way, the superposition means there is something different going on than a classical mixed state.
I agree. We infer that the superposition (via the preparation) has been experimentally realized when we observe that the entangled state stats have been produced -- which differ from the classical mixed state stats. But this has nothing to do with the argument(s) presented in this thread.

DrChinese said:
A local realist denies this, saying that there is no superluminal influence and that QM is incomplete because a greater specification of the system is possible.
I think we agree that lhv theories of entangled states are ruled out. We just differ as to why they're ruled out. But it's an important difference, and one worth discussing. I don't think that a greater specification of the system, beyond what qm offers, is possible. But I also think that it's important to understand why this doesn't imply nonlocality or ftl info transmission.

I do very much appreciate your comments and questions as they spur me to refine how I might communicate what I intuitively see.

DrChinese said:
But Bell shows that QM, if incomplete, is simply wrong. That's a big pill to swallow, given 10,000 experiments (or whatever) that say it isn't.
Qm, like any theory, can be an incomplete description of the underlying physical reality without being just simply wrong. I think Bell showed just what he said he showed, that a viable specification of the entangled state (ie., the statistical predictions of qm) is incompatible with separable predetermination. However, in showing that, he didn't show that separable predetermination is impossible in Nature, but only that the hidden variables which would determine individual detection SEQUENCES are not relevant wrt determining joint detection RATES. A subtle, but important, distinction.

Regarding billschnieder's argument, I'm not sure that what he's saying is equivalent to what I'm saying, but it seems to accomplish the same thing wrt Bell's ansatz, which is that it can't correctly model entanglement setups. (billschnieder might hold the position, with e.g. 't Hooft et al., that some other representation of local reality might be possible which would violate BIs, or that could be the basis for a new quantitative test which qm and results wouldn't violate. That isn't my position. I agree with Bell, you, et al. that Bell's ansatz is the only form that an explicit lhv theory of entanglement can take, but since this form can't possibly model the situation it's being applied to, independent of the tacit assumption of locality, lhv theories of entanglement are ruled out independent of the tacit assumption of locality. We simply can't explicate that tacit assumption wrt the joint context, because that would require us to express the joint results in terms of variables which don't determine the joint results.)

Anyway, it seems that we can dispense with considerations of the minimum and maximum propagation rates of entangled particle 'communications' and, hopefully, focus instead on the real causes of the observed correlations. Quantum entanglement is a real phenomenon, and it's certainly reasonable to suppose that it's a result of the dynamical laws which govern any and all waves in any and all media. That is, it's reasonable to suppose that there are fundamental wave dynamics which apply to any and all scales of behavior.

After all, why is qm so successful? Could it be because wave behavior in undetectable media underlying quantum instrumental phenomena isn't essentially different than wave behavior in media that we can see?

With apologies to billschnieder for my ramblings, and returning the focus of this thread to billschnieder's argument, I think that he's demonstrated the inapplicability of Bell's ansatz to the joint entangled situation. And, since P(B|H)/=P(B|AH) holds without implying ftl info transmission, then the inapplicability of P(AB|H)=P(A|H)P(B|H) doesn't imply ftl info transmission.

Beyond this, the question of whether ANY lhv theory of entanglement is possible might be considered open. My answer is no, based on the following consideration: all disproofs of lhv theories, including those not based directly on Bell's ansatz, involve limitations on the range of entanglement predictions due to explicitly local hidden variables a la EPR. But it's been shown that these variables are mooted in the joint (entangled) situation, and explicit lhv formulations of entanglement bring us back to Bell's ansatz or some variation of it. So, lhv theories (of the sort conforming to EPR's notion of local reality, anyway) seem to be definitively ruled out.
 
  • #99
Apologies to billschnieder if I've got his argument wrong.

The usual:
1) Bell's ansatz correctly represents local-causal hidden variables
2) Bell's ansatz necessarily leads to Bell's inequalities
3) Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of the experiments is not locally causal.

Per billschnieder:
1) Bell's ansatz incorrectly represents local-causal hidden variables
2) Bell's ansatz necessarily leads to Bell's inequalities
3) Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of the experiments is incorrectly represented by Bell's ansatz.

Per ThomasT:
1) Bell's ansatz correctly represents local-causal hidden variables
2) Bell's ansatz incorrectly represents the relationship between the local-causal hidden variables
3) The experimental situation is measuring this relationship
4) Bell's ansatz necessarily leads to Bell inequalities
5) Experiments violate Bell's inequalities
Conclusion: Therefore the real physical situation of the experiments is incorrectly represented by Bell's ansatz.

or to put it another way:
1) Bell's ansatz is the only way that local hidden variables can explicitly represent the experimental situation
2) This representational requirement doesn't express the relationship between the hidden variables
3) The experimental situation is measuring this relationship
etc.
Conclusion: Therefore the real physical situation of the experiments is, necessarily, incorrectly represented by Bell's ansatz.

We can continue with:
1) Any lhv representation of the experimental situation must conform to Bell's ansatz or some variation of it.
Then, given the foregoing, we can conclude:
Therefore lhv representations of entanglement are impossible.

But of course, per billschnieder's original point, this doesn't tell us anything about Nature.
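As a sanity check on premise 2, which all three versions above share, here is the standard one-line route from the ansatz to a CHSH-form Bell inequality, sketched from memory in the plain notation this thread has been using. For any given λ, with outcomes A(a,λ), B(b,λ) ∈ {-1,+1} fixed by λ and the local setting alone:

A(a)B(b) + A(a)B(b') + A(a')B(b) - A(a')B(b') = A(a)[B(b) + B(b')] + A(a')[B(b) - B(b')] = ±2

since one bracket is ±2 exactly when the other is 0. Averaging over any distribution of λ then gives |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2. So premise 2 is forced once the factorized form is granted; the disagreements above are all about premise 1 (or about what the ansatz actually represents).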
 
  • #100
JesseM, I'm here to learn. You didn't reply to my reply to your post where you stated:

JesseM said:
If measurement A is at a spacelike separation from B, then isn't it clear that according to local realism, knowledge of A cannot alter your estimate of the probability of B if you were already basing that estimate on H, which encompasses every microscopic physical fact in the past light cone of B? To suggest otherwise would imply FTL information transmission ...
This isn't yet clear to me. If we assume a relationship between the polarizer-incident disturbances due to a local common origin (say, emission by the same atom), then doesn't the experimental situation allow that both Alice and Bob know at the outset (i.e., the experimental preparation is in the past light cones of both observers) that, for certain settings, if A=1 then B=1 and if A=0 then B=0 (and, for other settings, if A=1 then B=0 and vice versa), without implying FTL transmission?

In a reply to billschnieder you stated:

JesseM said:
... if P(B|L) was not equal to P(B|LA), that would imply P(A|L) is not equal to P(A|BL), meaning that learning B gives us some additional information about what happened at A, beyond whatever information we could have learned from anything in the past light cone of B ...
I agree that if P(B|L) ≠ P(B|AL) then P(A|L) ≠ P(A|BL), but doesn't the correctness of both of those expressions follow from the contingencies, for certain settings, established by the experimental preparation, which is in the past light cones of both A and B?

So, it does seem that P(AB|L) ≠ P(A|L)P(B|L) without implying FTL transmission.
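For what it's worth, here's a numeric version of that claim under one concrete, hypothetical preparation: each pair shares a polarization angle λ fixed at emission, each side passes its polarizer with the Malus probability cos²(setting - λ), and L is the preparation procedure but not the per-pair λ:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = rng.uniform(0.0, np.pi, 1_000_000)  # shared polarization angle, set at emission

def passes(setting, lam):
    # local Malus-law detection: pass with probability cos^2(setting - lambda)
    return rng.random(lam.size) < np.cos(setting - lam) ** 2

A = passes(0.0, lam)  # Alice, setting 0
B = passes(0.0, lam)  # Bob, same setting, independent local randomness

print("P(AB | L)    =", np.mean(A & B))       # ~0.375 (= E[cos^4] = 3/8)
print("P(A|L)P(B|L) =", A.mean() * B.mean())  # ~0.25
# The joint probability exceeds the product because L omits the common cause
# lambda; there is no FTL transmission anywhere in the model.
```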

In another reply to billschnieder you stated:

JesseM said:
Consider the possibility that you may not actually understand everything about this issue, and therefore there may be points that you are missing. The alternative, I suppose, is that you have no doubt that you already know everything there is to know about the issue, and are already totally confident that your argument is correct and that Bell was wrong to write that equation ...
Is it possible that the equation is wrong for the experimental situation, but that Bell was, in a most important sense, correct to write it that way vis EPR? Haven't hidden variables historically (vis EPR) been taken to refer to underlying parameters that would affect the prediction of individual results? If so, then wouldn't a formulation of the joint situation in terms of that variable have to take the form of Bell's ansatz? If so, then Bell's ansatz is, in that sense, correct. However, what if the underlying parameter that's being jointly measured isn't the underlying parameter that determines individual results? For example, if it's the relationship between the optical vectors of disturbances emitted during the same atomic transition, and not the optical vectors themselves, that's being jointly measured, then wouldn't that require a different formulation for the joint situation?

Do the assumptions that (1) this relationship is created during the common origin of the disturbances via emission by the same atom, and that (2) it therefore exists prior to measurement by the crossed polarizers, and that (3) counter-propagating disturbances are identically polarized (though the polarization vector of any given pair is random and indeterminable), contradict the qm treatment of this situation? If not, then might the foregoing be taken as an understanding of violations of BIs due to nonseparability of the joint entangled state?
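Taking assumptions (1)-(3) literally, with Malus-law detection as a stand-in local mechanism (my own choice, so treat this as a sketch rather than a model of any actual experiment), the coincidence rate works out to 1/4 + (1/8)cos 2(a-b): the same shape as the qm prediction of 1/4 + (1/4)cos 2(a-b), but with half the modulation depth, which is one way of seeing why a model like this respects the BIs that qm violates:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = rng.uniform(0.0, np.pi, 500_000)  # common polarization vector per pair

def coincidence(a, b, lam):
    # both sides pass, each with independent Malus probability cos^2(setting - lambda)
    return np.mean(np.cos(a - lam) ** 2 * np.cos(b - lam) ** 2)

for deg in (0.0, 22.5, 45.0, 67.5, 90.0):
    d = np.deg2rad(deg)
    classical = coincidence(0.0, d, lam)  # -> 1/4 + (1/8) cos(2d)
    qm = 0.5 * np.cos(d) ** 2             # -> 1/4 + (1/4) cos(2d)
    print(f"delta = {deg:5.1f} deg: classical {classical:.3f}, qm {qm:.3f}")
```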

I think that Bell showed just what he said he showed -- that the statistical predictions of qm are incompatible with separable predetermination. This, according to my attempt at disambiguation, means that joint experimental situations which produce (and for which qm correctly predicts) entanglement stats can't be viably modeled in terms of the variable or variables which determine individual results.

Any criticisms of, or comments on, any part of the above viewpoint are appreciated.
 
  • #101
ThomasT said:
Is it possible that the equation is wrong for the experimental situation, but that Bell was, in a most important sense, correct to write it that way vis EPR? Haven't hidden variables historically (vis EPR) been taken to refer to underlying parameters that would affect the prediction of individual results? If so, then wouldn't a formulation of the joint situation in terms of that variable have to take the form of Bell's ansatz? If so, then Bell's ansatz is, in that sense, correct. However, what if the underlying parameter that's being jointly measured isn't the underlying parameter that determines individual results? For example, if it's the relationship between the optical vectors of disturbances emitted during the same atomic transition, and not the optical vectors themselves, that's being jointly measured, then wouldn't that require a different formulation for the joint situation?

Do the assumptions that (1) this relationship is created during the common origin of the disturbances via emission by the same atom, and that (2) it therefore exists prior to measurement by the crossed polarizers, and that (3) counter-propagating disturbances are identically polarized (though the polarization vector of any given pair is random and indeterminable), contradict the qm treatment of this situation? If not, then might the foregoing be taken as an understanding of violations of BIs due to nonseparability of the joint entangled state?

Any criticisms of, or comments on, any part of the above viewpoint are appreciated.

QM does NOT imply that anything exists prior to and independent of measurement, as we have told you at least 106 times. There are no local counter-propagating influences in the sense you describe either.
 
  • #102
DrChinese said:
QM does NOT imply that anything exists prior to and independent of measurement, as we have told you at least 106 times. There are no local counter-propagating influences in the sense you describe either.
I think you need to look at the emission models relevant to the FandC (Freedman-Clauser) and Aspect experiments.

Why don't you reply to this?:

ThomasT said:
Is it possible that the equation is wrong for the experimental situation, but that Bell was, in a most important sense, correct to write it that way vis EPR? Haven't hidden variables historically (vis EPR) been taken to refer to underlying parameters that would affect the prediction of individual results? If so, then wouldn't a formulation of the joint situation in terms of that variable have to take the form of Bell's ansatz? If so, then Bell's ansatz is, in that sense, correct. However, what if the underlying parameter that's being jointly measured isn't the underlying parameter that determines individual results? For example, if it's the relationship between the optical vectors of disturbances emitted during the same atomic transition, and not the optical vectors themselves, that's being jointly measured, then wouldn't that require a different formulation for the joint situation?

or this?:
ThomasT said:
I think that Bell showed just what he said he showed -- that the statistical predictions of qm are incompatible with separable predetermination. This, according to my attempt at disambiguation, means that joint experimental situations which produce (and for which qm correctly predicts) entanglement stats can't be viably modeled in terms of the variable or variables which determine individual results.
 
  • #103
ThomasT said:
I think you need to look at the emission models relevant to the FandC (Freedman-Clauser) and Aspect experiments.

Why don't you reply to this?:

...

or this?:

OK, in my opinion it is meaningless. Hey, you asked.

(I usually don't reply if I don't have something nice to say. Unless of course I'm pissed off.)
 
  • #104
DrChinese said:
OK, in my opinion it is meaningless. Hey, you asked.
I don't think you understand it. Anyway, the post was directed at JesseM.

DrChinese said:
(I usually don't reply if I don't have something nice to say. Unless of course I'm pissed off.)
We both know that isn't true.

Like I said in the other thread, you're making my case.
 
  • #105
DrChinese said:
OK, in my opinion it is meaningless.
You mean like this:

DrChinese said:
... Because I accept Bell, I know the world is either non-local or contextual (or both). If it is non-local, then there can be communication at a distance between Alice and Bob. When Alice is measured, she sends a message to Bob indicating the nature of the measurement, and Bob changes appropriately. Or something like that; the point is that if non-local action is possible, then we can presumably build a mechanism which explains entanglement results.
I hope that works out for you.
 
