Is Bell's Theorem a Valid Solution to the Locality Versus Nonlocality Issue?

In summary, Bell's theorem is a mathematical truth: when its conditions are met, it cannot be violated by any experiment on two-valued variables. Violations may occur, however, if the conditions of the theorem are not met. Two coin-tossing examples were given to demonstrate this. In Example 1 there is a one-to-one mapping among the three sequences for ab, bc, and ac. In the EPRB experiments, however, only one angle pair can be measured at a time, requiring six sequences, which may produce a violation of Bell's theorem. This raises the question of whether Bell's theorem can be used to resolve the issue of locality versus nonlocality. Some argue that Bell's theorem may
  • #141
DrChinese said:
The outcome VALUE of an observable must pre-exist. I think it is clear that is what EPR thought. Else what are you asserting as being realism? Gimme a useful definition that others might be able to use.

1. What do you mean, please, by the outcome VALUE.

2. What is the value associated with a spin-half particle, "spin-up at 45 degrees"?

3. What is the value associated with a pristine spin-half particle, "entangled"?

4. Is it not the case that Bell [1964: equation (1)] assigns the VALUES ± 1 to outcomes?

5. Surely ±1s don't pre-exist?

6. AS FOR ME, a dedicated local realist: In Bell (1964), pristine lambda represents the INITIAL VALUE of the following REAL hidden-variable: the orientation of each pristine (and entangled) particle's principal axis associated with total spin.

7. After "measurement", lambda remains a principal axis: BUT it is now that associated (in QM) with intrinsic spin.

8. So we have the transformation of pristine lambda (perturbed by "measurement") to a new variable [lambda --> +a, say] as a result of the "measurement" interaction.

I'd welcome clarification, or your opinion, on each of these points.
 
  • #142
Gordon Watson said:
1. What do you mean, please, by the outcome VALUE.

2. What is the value associated with a spin-half particle, "spin-up at 45 degrees"?

3. What is the value associated with a pristine spin-half particle, "entangled"?

4. Is it not the case that Bell [1964: equation (1)] assigns the VALUES ± 1 to outcomes?

5. Surely ±1s don't pre-exist?

6. AS FOR ME, a dedicated local realist: In Bell (1964), pristine lambda represents the INITIAL VALUE of the following REAL hidden-variable: the orientation of each pristine (and entangled) particle's principal axis associated with total spin.

...

I'd welcome clarification, or your opinion, on each of these points.

1. 2. 4. You can label those outcome/result values (or whatever you choose to call them) however you like, +1 or -1 (following Bell), H or T, up or down; I fail to see what difference it makes. It is the local realist who is asserting these exist, not me. I tend to see the result of an observation as being "real".

3. 5. I don't think there is such. But I am not a realist anyway. :smile: (Hey, all my life I've been told my ideas are unrealistic so I guess that fits.)

6. This is obviously false and I am shocked you would state this as being your position. Even to a local realist it should be obvious that you don't get perfect EPR correlations from a (single/dual/triple) principal axis. You need a lot more encoded in those babies than a few bits to get the ability to predict spin with certainty.

However, I give you 10 points for at least owning up to some kind of a definition. That is more than most local realists will do. :biggrin:
 
  • #143
DrChinese said:
1. 2. 4. You can label those outcome/result values (or whatever you choose to call them) however you like, +1 or -1 (following Bell), H or T, up or down; I fail to see what difference it makes. It is the local realist who is asserting these exist, not me. I tend to see the result of an observation as being "real".

3. 5. I don't think there is such. But I am not a realist anyway. :smile: (Hey, all my life I've been told my ideas are unrealistic so I guess that fits.)

6. This is obviously false and I am shocked you would state this as being your position. Even to a local realist it should be obvious that you don't get perfect EPR correlations from a (single/dual/triple) principal axis. You need a lot more encoded in those babies than a few bits to get the ability to predict spin with certainty.

However, I give you 10 points for at least owning up to some kind of a definition. That is more than most local realists will do. :biggrin:
Added emphasis by GW.

Thanks Doc. SOS!

Why not view this [ ] before elaborating on point #6? I'm not much into believing in the "obviously false" -- nor overlooking the "should be obvious".
 
  • #144
Gordon Watson said:
1. What do you mean, please, by the outcome VALUE.

2. What is the value associated with a spin-half particle, "spin-up at 45 degrees"?

3. What is the value associated with a pristine spin-half particle, "entangled"?

4. Is it not the case that Bell [1964: equation (1)] assigns the VALUES ± 1 to outcomes?

5. Surely ±1s don't pre-exist?
I think Gordon may still be unclear on the basic notion of the EPR/Bell argument involving predetermined values, so I want to elaborate a bit on DrChinese's answer here. ± 1 are indeed assigned to outcomes, but EPR/Bell assume that if by measuring particle #1 we can perfectly predict what the outcome will be if a given type of measurement is made on particle #2 far away (and at a spacelike separation), then in a local realistic theory there must have been some local "elements of reality" associated with particle #2 beforehand that predetermined what the outcome of such a measurement would be. It's not that the particle already has a property equal to that outcome, since the act of measurement could very well alter the particle's properties: for example, even if a particle was predetermined to show momentum p if its momentum was measured (and we can predict this by measuring the momentum of its entangled twin and invoking the rule that momentum is always measured to be conserved), its "hidden" momentum prior to measurement could have been some different value p', but it must in that case have had some local hidden variables that ensured that if a momentum measurement were performed on it, its momentum would change to the value p and that would be the observed outcome of the measurement.

Einstein uses the analogy of a pair of boxes, such that whenever we open one, if we see a ball inside then it's guaranteed no ball will be found when the other is opened, and vice versa. Under local realism, one way to explain this is just to say the boxes each already had the property of "ball inside" or "no ball inside" beforehand. But as I discussed in [post=3270631]this post[/post], you could also come up with more complicated explanations where the "ball inside" property did not itself exist prior to measurement, but the boxes had other properties which predetermined whether you'd see a ball inside when opened or not:
In terms of the box analogy, one might imagine that instead of one box containing a ball before being opened, they both contain computers connected to holographic projectors, and the computers can sense when the lid is being opened and depending on their programming they will either respond by projecting an image of a ball, or projecting the image of an empty box. In this case the local variables associated with each box would not consist of "ball" or "no ball", but rather would be a detailed specification of the programming of each computer. But it would still be true based on the separation principle and the perfect correlation between results that if one was programmed to project a ball when the box was opened, that must mean the other was programmed to project an empty box, so the local variables (the program of each computer) would still predetermine the fact that one would give the measurement result "saw a ball" and the other would give the result "didn't see a ball".
Gordon, if you don't think that particles have local hidden variables that predetermine the results they will give when measured in this scenario (which I guess in terms of your #6 would mean that even given advance knowledge of "the orientation of each pristine (and entangled) particle's principal axis associated with total spin" for each particle, we couldn't say in advance with perfect certainty whether it would give +1 or -1 if the spin was measured at a given angle), does that mean you think there is a random element in what result they give? If so, as a local realist how can you explain the fact (in this scenario) that whenever both experimenters choose the same property to measure, they are guaranteed to get the same (or opposite) results? If there was a random element to the outcomes, and the random events in the neighborhood of one particle couldn't have a nonlocal influence on random events in the neighborhood of the other in the case of measurements at a spacelike separation, wouldn't that mean there would be some nonzero probability they would fail to give the same (or opposite) results? Think of the box scenario with the holographic projectors, but suppose each box also has something like a true random number generator that determines whether it will project an image of a ball or not--since the two random numbers picked are statistically independent, even if the odds are stacked so it's unlikely they would both project holographic balls (say the first box is programmed to randomly pick a number from 1-100 and project a ball if the number is anywhere from 1 to 99, while the second box also picks a number from 1-100 and only projects a ball if the result is 1), there's always going to be a nonzero probability they both will (in this case the probability both will project images of balls is (99/100)*(1/100)).
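The closing arithmetic can be checked directly. A minimal sketch (the generator ranges are taken from the post above; the Monte Carlo confirmation is my own addition):

```python
import random

# First box: picks 1-100, projects a ball for any pick from 1 to 99
p_first = 99 / 100
# Second box: picks 1-100, projects a ball only on a pick of exactly 1
p_second = 1 / 100

# With statistically independent generators, the joint probability is the
# product, which is nonzero -- so perfect anti-correlation cannot be guaranteed.
exact = p_first * p_second
print(round(exact, 6))  # 0.0099

# Monte Carlo confirmation of the same setup
trials = 200_000
both = sum(
    1 for _ in range(trials)
    if random.randint(1, 100) <= 99 and random.randint(1, 100) == 1
)
print(abs(both / trials - exact) < 0.005)  # True (up to sampling noise)
```

However the odds are stacked, as long as both probabilities are nonzero the product is nonzero, which is the point of the paragraph above.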
 
  • #145
JesseM said:
I think Gordon may still be unclear on the basic notion of the EPR/Bell argument involving predetermined values, so I want to elaborate a bit on DrChinese's answer here. ± 1 are indeed assigned to outcomes, but EPR/Bell assume that if by measuring particle #1 we can perfectly predict what the outcome will be if a given type of measurement is made on particle #2 far away (and at a spacelike separation), then in a local realistic theory there must have been some local "elements of reality" associated with particle #2 beforehand that predetermined what the outcome of such a measurement would be. It's not that the particle already has a property equal to that outcome, since the act of measurement could very well alter the particle's properties: for example, even if a particle was predetermined to show momentum p if its momentum was measured (and we can predict this by measuring the momentum of its entangled twin and invoking the rule that momentum is always measured to be conserved), its "hidden" momentum prior to measurement could have been some different value p', but it must in that case have had some local hidden variables that ensured that if a momentum measurement were performed on it, its momentum would change to the value p and that would be the observed outcome of the measurement.

Einstein uses the analogy of a pair of boxes, such that whenever we open one, if we see a ball inside then it's guaranteed no ball will be found when the other is opened, and vice versa. Under local realism, one way to explain this is just to say the boxes each already had the property of "ball inside" or "no ball inside" beforehand. But as I discussed in [post=3270631]this post[/post], you could also come up with more complicated explanations where the "ball inside" property did not itself exist prior to measurement, but the boxes had other properties which predetermined whether you'd see a ball inside when opened or not:

Gordon, if you don't think that particles have local hidden variables that predetermine the results they will give when measured in this scenario (which I guess in terms of your #6 would mean that even given advance knowledge of "the orientation of each pristine (and entangled) particle's principal axis associated with total spin" for each particle, we couldn't say in advance with perfect certainty whether it would give +1 or -1 if the spin was measured at a given angle), does that mean you think there is a random element in what result they give? If so, as a local realist how can you explain the fact (in this scenario) that whenever both experimenters choose the same property to measure, they are guaranteed to get the same (or opposite) results? If there was a random element to the outcomes, and the random events in the neighborhood of one particle couldn't have a nonlocal influence on random events in the neighborhood of the other in the case of measurements at a spacelike separation, wouldn't that mean there would be some nonzero probability they would fail to give the same (or opposite) results? Think of the box scenario with the holographic projectors, but suppose each box also has something like a true random number generator that determines whether it will project an image of a ball or not--since the two random numbers picked are statistically independent, even if the odds are stacked so it's unlikely they would both project holographic balls (say the first box is programmed to randomly pick a number from 1-100 and project a ball if the number is anywhere from 1 to 99, while the second box also picks a number from 1-100 and only projects a ball if the result is 1), there's always going to be a nonzero probability they both will (in this case the probability both will project images of balls is (99/100)*(1/100)).

Thanks Jesse,

To put your mind at ease:

1. I believe that we live in a quantum world and that classical analogies are (consequently) often misleading. Bell's reference (even deference) to a classical analogy by d'Espagnat was, imho at the time, laughable. [Edit: See Bell, in Bertlmann's socks: "To explain this denouement without maths I cannot do better than follow d'Espagnat." Still laughable; even desperate, imho.]

2. There is no random element in the outcome you refer to.

3. Determinism is not a rude word with me.
 
  • #146
Gordon Watson said:
Thanks Jesse,

To put your mind at ease:

1. I believe that we live in a quantum world and that classical analogies are (consequently) often misleading. Bell's reference (even deference) to a classical analogy by d'Espagnat was, imho at the time, laughable. [Edit: See Bell, in Bertlmann's socks: "To explain this denouement without maths I cannot do better than follow d'Espagnat." Still laughable; even desperate, imho.]

2. There is no random element in the outcome you refer to.

3. Determinism is not a rude word with me.
So, do you think a hypothetical omniscient observer with complete knowledge of one particle's local properties (which might include the particle's own "orientation" in your model, but wouldn't include any information about the other particle) at some time prior to measurement (but after the past light cones of the two measurement regions have ceased to overlap, as in "region 3" of fig. 2 at the top of p. 3 in this paper) would be able to predict in advance with total certainty what outcome would be seen if the particle were measured at any of three detector settings? So that the observer could say something like "this particle's local properties ensure it is predetermined to give +1 if measured at angle a, -1 if measured at angle b, and -1 if measured at angle c"? And if both particles are always found to give identical observed results when measured at the same angle, would you agree this implies (under local realism) that for each pair emitted by the source, their local properties must be correlated in such a way as to ensure that one particle must have the same three predetermined results as the other one?
 
  • #147
SpectraCat said:
Bill,

I have read your Bell example carefully, and come to the conclusion that you are just using a lot of words to say, "you can never cross the same river twice". In other words, you are denying that experiments carried out on ensembles of identically prepared particles can give predictable results.
No. I'm not saying experiments carried out on ensembles cannot give predictable results. Rather, I'm using a lot of words to say, "the average height of 100 people means something completely different from the average of a single person's height measured 100 times"; both can give predictable results but are not necessarily the same. In more relevant terms, I am saying the three terms from an experiment or from QM cannot be used simultaneously in a single inequality, since they represent alternative possibilities only one of which can ever be actual, whereas Bell's inequality is dealing with an abstract thought experiment in which all three are simultaneously available.

You are absolutely right that the 3 Bell distributions P(a,b), P(b,c) and P(a,c) will always be measured with different ensembles of particles ... but so what?
Bell's inequality derivation relies on the fact that they originate from THE SAME ensemble, therefore you cannot use three different ensembles and expect them to just work. If you think three different ensembles should work, you should start out from that assumption and derive the inequalities and show that you can still obtain them. However, many authors have done that, and obtained different inequalities which no experiment or QM has ever violated. See the articles I mentioned earlier.

Also, I notice that you never addressed my points about the probabilistic predictions made by QM being incompatible with CFD ... after all, you can never say anything about a probability distribution based on the results of a single measurement. As I understand CFD, it requires that you be able to make definite predictions about individual measurements. Is that not correct?
A definite prediction is one that is unambiguous. I guess if the experiment is one single event then the result will be ambiguous, but the experiment in this case is not one event, it is a series, and the prediction from QM is not about a single event but an unambiguous expectation value. It does not matter anyhow, because rejection of CFD is just a red herring and does not address the main issues.

You mentioned a couple of times about predictions of expectation values, but that is exactly what I am saying .. expectation values are averages ... they cannot be determined by single experiments, but only by (large) sets of repeated experiments on the same system.
It will be interesting to know what you mean by "system" here, since clearly each event is a different system.

Suggesting that the averages on the first 100 photons must be the same as the averages on the next 100 is similar to saying the average stock price of a stock for the first 100 days of the year must be the same as the average for the next 100 days of the year. It is easy to make that mistake if you are really thinking that you are measuring the same photon every time (or the same stock tick every time), which is impossible to do, so you just naively measure a different photon and hope that the averages are the same.

Given an ensemble of identically prepared entangled particles, and a pair of detectors (Alice and Bob) with 3 settings {a,b,c}, then sufficiently large sets of measurements with identical settings will yield the following results:

1) if Alice and Bob set their detectors to a & b, respectively, they will measure the expectation value P(a,b)
2) if Alice and Bob set their detectors to a & c, respectively, they will measure the expectation value P(a,c)
3) if Alice and Bob set their detectors to b & c, respectively, they will measure the expectation value P(b,c)
Note that you are relying on the idea that everything is identical, presumably because you are hoping that the results will be equivalent from one photon to the next...
I do not think those statements are consistent with CFD because, as I have laid out above (and previously), they are statements about ensembles rather than discrete events. However, if I am wrong, and CFD predictions can be stated probabilistically, as opposed to definitely (which seems inconsistent with its name), then I suppose those statements would be consistent with CFD.
In my previous posts I have explained why your definition of CFD will not eliminate the conundrum, so I will just say here that it doesn't matter, as the real issue is elsewhere.

I suppose you know about the triangle inequality, which says that for any triangle with sides labeled x, y, z (where x, y, z represent the lengths of the sides)

z <= x + y

Note that this inequality applies to a single triangle. What if you could only measure one side at a time? Assume that for each measurement you set the label of the side your instrument should measure, and it measured the length, destroying the triangle in the process. So you performed a large number of measurements on different triangles, measuring <z> for the first run, <x> for the next, and <y> for the next.

Do you believe the inequality

<z> <= <x> + <y>

is valid? In other words, do you believe it is legitimate to use those averages in your inequality to verify its validity?
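The triangle analogy can be made concrete in code. In this sketch (my own construction, not from the thread) the first experiment draws all three runs from one triangle population, where the averaged inequality does hold, while the second draws the run measuring z from a differently scaled population, where it fails even though every individual triangle obeys z <= x + y:

```python
import random

random.seed(0)  # for reproducibility

def triangle():
    # Rejection-sample side lengths until they form a valid triangle
    while True:
        x, y, z = (random.uniform(0, 10) for _ in range(3))
        if x + y > z and y + z > x and z + x > y:
            return x, y, z

def scaled_triangle(s):
    x, y, z = triangle()
    return s * x, s * y, s * z  # scaling preserves z <= x + y for each triangle

n = 20_000

# Case 1: three separate runs, each measuring ONE side of a fresh triangle,
# but all triangles drawn from the same population.
mean_z = sum(triangle()[2] for _ in range(n)) / n
mean_x = sum(triangle()[0] for _ in range(n)) / n
mean_y = sum(triangle()[1] for _ in range(n)) / n
print(mean_z <= mean_x + mean_y)  # True: one population, averages still obey it

# Case 2: the run measuring z happens to sample much larger triangles.
mean_z2 = sum(scaled_triangle(5)[2] for _ in range(n)) / n
mean_x2 = sum(scaled_triangle(1)[0] for _ in range(n)) / n
mean_y2 = sum(scaled_triangle(1)[1] for _ in range(n)) / n
print(mean_z2 <= mean_x2 + mean_y2)  # False: different populations break it
```

Whether a Bell test is more like case 1 or case 2 is exactly what the thread disputes; the code only shows that mixing populations is where the averaged inequality can break.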
 
  • #148
JesseM said:
So, do you think a hypothetical omniscient observer with complete knowledge of one particle's local properties (which might include the particle's own "orientation" in your model, but wouldn't include any information about the other particle) at some time prior to measurement (but after the past light cones of the two measurement regions have ceased to overlap, as in "region 3" of fig. 2 at the top of p. 3 in this paper) would be able to predict in advance with total certainty what outcome would be seen if the particle were measured at any of three detector settings? So that the observer could say something like "this particle's local properties ensure it is predetermined to give +1 if measured at angle a, -1 if measured at angle b, and -1 if measured at angle c"? And if both particles are always found to give identical observed results when measured at the same angle, would you agree this implies (under local realism) that for each pair emitted by the source, their local properties must be correlated in such a way as to ensure that one particle must have the same three predetermined results as the other one?

Sure; why not?
 
  • #149
Gordon Watson said:
Sure; why not?
Because this line of argument leads inevitably to Bell inequalities, as I and others have been trying to explain to you since you started posting here. Suppose we have some large number of particle pairs; from the above, you should agree that in each pair the two particles should have some definite set of predetermined results like [+ on a, - on b, + on c] or [- on a, + on b, + on c], etc.? And for any collection of things (like particle pairs) where each member of the collection either does or doesn't have each of three possible properties A, B, and C (say A=+ on angle a, B=+ on angle b, C=+ on angle c, so "not A"=- on angle a, "not B"=- on angle b, and "not C"=- on angle c), simple arithmetic shows the whole collection must satisfy this inequality:

Number(A, not B) + Number(B, not C) ≥ Number(A, not C)

There's a proof on this page, but I think their proof is not as simple as it could be; the simplest way of seeing it is this:

Number(A, not B) = Number(A, not B, C) + Number(A, not B, not C) [since any member of the group satisfying A, not B must either have or not have property C]
Number(B, not C) = Number(A, B, not C) + Number(not A, B, not C)
Number(A, not C) = Number(A, B, not C) + Number(A, not B, not C)

And plugging this into the above inequality and cancelling like terms from both sides gives:

Number(A, not B, C) + Number(not A, B, not C) ≥ 0

Which obviously must be true since the number with any given set of properties must be ≥ 0!

Anyway, whether you like my proof or the one on the page I linked to better, hopefully you agree that if we knew the complete set of three predetermined properties for a collection of particle pairs, the inequality Number(A, not B) + Number(B, not C) ≥ Number(A, not C) would be satisfied? If so, it's a short step from there to the statement that if you measure two properties for a large number of particle pairs, P(A, not B|measured a and b) + P(B, not C|measured b and c) ≥ P(A, not C|measured a and c) (basically the only extra assumption needed is that the probability the experimenters will pick a given pair of axes to measure is uncorrelated with the triplet of predetermined results prior to measurement). I discuss this more on post #11 here, but we can also discuss it here if you agree with the inequality Number(A, not B) + Number(B, not C) ≥ Number(A, not C) for all particle pairs but don't agree that for measurements this implies P(A, not B|measured a and b) + P(B, not C|measured b and c) ≥ P(A, not C|measured a and c).
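The counting argument above can also be verified mechanically. A small brute-force sketch (my own, representing each particle pair's predetermined results as a boolean triple):

```python
from itertools import product

def counts(collection):
    """Return the three counts in the inequality for a list of (A, B, C) triples."""
    n_a_notb = sum(1 for a, b, c in collection if a and not b)
    n_b_notc = sum(1 for a, b, c in collection if b and not c)
    n_a_notc = sum(1 for a, b, c in collection if a and not c)
    return n_a_notb, n_b_notc, n_a_notc

# Exhaustively check every collection of up to 4 members drawn (with repetition)
# from the 8 possible property triples: the inequality never fails.
triples = list(product([True, False], repeat=3))
ok = True
for size in range(1, 5):
    for members in product(triples, repeat=size):
        n1, n2, n3 = counts(members)
        if n1 + n2 < n3:
            ok = False
print(ok)  # True
```

The exhaustive search is of course weaker than the two-line algebraic proof above, but it makes the claim easy to sanity-check for small collections.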
 
  • #150
billschnieder said:
Bell's inequality derivation relies on the fact that they originate from THE SAME ensemble, therefore you cannot use three different ensembles and expect them to just work.
Well, you can if you have a single ensemble of particle pairs, and then for each pair you choose which combination of properties to measure using a rule that is statistically uncorrelated with the hidden properties of each pair--the "no-conspiracy assumption" which you seem to have forgotten about. Read my post #11 on this thread for more on this point, and consider the following part in particular:
If you disagree, think of it this way. Suppose we generate a hypothetical list of the predetermined values for each in a series of N trials, where N is fairly large, say N=100, like this:

trial #1: [A, B, C]
trial #2: [A, not-B, not-C]
trial #3: [not-A, B, not-C]
trial #4: [A, B, not-C]
...
trial #100: [A, not-B, not-C]

You can use any algorithm you want to generate this list, including one where you pick the values for each trial based on a probability distribution for all 8 possible combinations, and the probability distribution itself changes depending on the number of the trial (equivalent to De Raedt's notion that the probability distribution for lambda might be time-dependent). Anyway, once you have the list, then select which two the imaginary experimenters are going to sample using a rule that is random with respect to the actual set of predetermined values on that trial--for example, you could use this random number generator with Min=1 and Max=3, and then on each trial if it gives "1" you say that the measurement was a,b, if it gives "2" you say the measurement was b,c, and if it gives "3" you say the measurement was a,c. I would say that regardless of what algorithm you chose to generate the original list of predetermined values, the fact that the choice of which values were sampled on each trial was random ensures that if the number of entries N on the list is large, the probability is very small that you'll get a violation of the inequality above involving measured subsets. Would you disagree with that?
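The procedure described above is easy to run. In the sketch below, the generation rule (a trial-dependent distribution, in the spirit of the De Raedt time-dependence mentioned above) and all the probabilities are my own illustrative choices; the one structural assumption is that the setting choice is independent of the triple, i.e. the no-conspiracy condition:

```python
import random

random.seed(42)  # for reproducibility

N = 100_000
hits = {"ab": 0, "bc": 0, "ac": 0}    # trials where the relevant (X, not Y) held
totals = {"ab": 0, "bc": 0, "ac": 0}  # trials measured at that setting pair

for i in range(N):
    # Any algorithm may generate the predetermined triple, including one whose
    # distribution drifts with the trial number (a time-dependent lambda)
    p = 0.2 + 0.6 * i / N
    a = random.random() < p
    b = random.random() < 0.5
    c = random.random() < 1 - p

    # The experimenters' choice is independent of (a, b, c): no conspiracy
    setting = random.choice(["ab", "bc", "ac"])
    totals[setting] += 1
    if setting == "ab" and a and not b:
        hits["ab"] += 1
    elif setting == "bc" and b and not c:
        hits["bc"] += 1
    elif setting == "ac" and a and not c:
        hits["ac"] += 1

p_ab = hits["ab"] / totals["ab"]
p_bc = hits["bc"] / totals["bc"]
p_ac = hits["ac"] / totals["ac"]
print(p_ab + p_bc >= p_ac)  # True: sampled subsets still satisfy the inequality
```

Replacing `random.choice` with a rule that depends on the triple is the loophole the no-conspiracy assumption closes: with such a rule, the sampled ratios need no longer track the full-list counts.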
 
  • #151
billschnieder said:
[..]
Bell's inequality derivation relies on the fact that they originate from THE SAME ensemble, therefore you cannot use three different ensembles and expect them to just work. If you think three different ensembles should work, you should start out from that assumption and derive the inequalities and show that you can still obtain them. However, many authors have done that, and obtained different inequalities which no experiment or QM has ever violated. See the articles I mentioned earlier.
Ok, then you appear to understand (and agree with) De Raedt et al's most recent paper on this issue. If so, please confirm it in the discussion thread about their paper. As I'm trying to understand the strengths and weaknesses of that argument, elaborations of that argument as formulated by them will be welcome. :smile:

Cheers,
Harald
 
  • #152
JesseM said:
Well, you can if you have a single ensemble of particle pairs, and then for each pair you choose which combination of properties to measure using a rule that is statistically uncorrelated with the hidden properties of each pair--the "no-conspiracy assumption" which you seem to have forgotten about. Read my post #11 on this thread for more on this point, and consider the following part in particular:
I simply ignore this argument, which you keep repeating, because it is far off the mark.
1) In any Bell test experiment, you are not dealing with a single ensemble but with at least 3 different ensembles! Your argument here seems to be similar to saying: if the average price of a stock for the first 100 days of the year is different from the second 100 days of the year, then there must be conspiracy, because we are dealing with a single stock (cf. what you call a single ensemble above).
2) You are confused if you think it is possible to randomly select based on hidden properties you know nothing about. Read up on Bertrand's paradox for more on this point.
Try to understand my argument and you will see that this line of argument you are suggesting as rebuttal is definitely not. If you think you've understood my argument, summarize it in your own words. Then respond to this very simple analogy:

I suppose you know about the triangle inequality, which says that for any triangle with sides labeled x, y, z (where x, y, z represent the lengths of the sides)

z <= x + y

Note that this inequality applies to a single triangle. What if you could only measure one side at a time? Assume that for each measurement you set the label of the side your instrument should measure, and it measured the length, destroying the triangle in the process. So you performed a large number of measurements on different triangles, measuring <z> for the first run, <x> for the next, and <y> for the next.

Do you believe the inequality

<z> <= <x> + <y>

is valid? In other words, do you believe it is legitimate to use those averages in your inequality to verify its validity?
 
  • #153
harrylin said:
Ok, then you appear to understand (and agree with) De Raedt et al's most recent paper on this issue. If so, please confirm it in the discussion thread about their paper. As I'm trying to understand the strengths and weaknesses of that argument, elaborations of that argument as formulated by them will be welcome. :smile:

Cheers,
Harald

You are right, my argument is exactly the same as theirs and the same as that of the other authors in the articles I mentioned earlier. I will summarize it in the thread you mention when I get some time.
 
  • #154
billschnieder said:
I simply ignore this argument, which you keep repeating, because it is far off the mark.
1) In any Bell test experiment, you are not dealing with a single ensemble but with at least 3 different ensembles!
Probably it would be better to avoid the word "ensemble", since a statistical ensemble normally refers to a hypothetical collection of possible outcomes which may be much larger than the number actually sampled in any experiment. But we can consider a single list of particle pairs, each of which is hypothetically assumed to be associated with a particular (unknown to us) set of three predetermined results for our three measurement settings (if we're dealing with the case where measurements at the same setting are guaranteed to yield identical results). Then it's true that your measurements divide this single list into multiple sub-lists, like the list where you measured a,b or the list where you measured b,c, but as long as the no-conspiracy condition holds, the probability that the collection of sub-lists will violate an inequality not violated by the original list gets smaller and smaller as the number of entries on the list gets larger and larger.
billschnieder said:
2) You are confused if you think it is possible to randomly select based on hidden properties you know nothing about. Read up on Bertrand's paradox for more on this point.
Bertrand's paradox has to do with the ambiguity in the phrase "random chord", since this phrase does not define any particular probability distribution on chords; what that has to do with this situation I don't know, since it's assumed there is some specific physical procedure (say, a random number generator) for making the choice of what detector setting to use on each trial. Anyway, does your answer mean you accept that if the no-conspiracy condition holds (i.e. P(λ|a,b) = P(λ|b,c), and so on for other combinations of detector settings), then the inequality becomes more and more likely to be true (under local realism) as the number of trials grows, but that you just don't think there are any good physical arguments for believing in the no-conspiracy condition? If so, I would say you probably haven't thought it through very carefully.

For example, suppose Alice and Bob are a great distance apart, and suppose on each successive day they are each measuring one member of a particle pair that was emitted from a source between them more than a year ago. And suppose we are defining the "hidden variables" for each particle to be restricted to a cross-section of the past light cone of each measurement (like "region 3" in fig. 2 on p. 3 of this paper), at a time one year before the measurement was actually performed but after the time the past light cones of the two measurements had ceased to overlap (again see the diagram in that paper). Now suppose that one day before measurement, Alice and Bob make their decision about which measurement setting to use based on the behavior of some very chaotic system, like the weather that day or a chaotic pendulum, which according to the butterfly effect might be in a completely different state at that time (leading to a different choice of detector settings) if even one tiny condition were different anywhere in the past light cone of that moment one year earlier. In this case, it would be a physically bizarre situation indeed if over a hypothetical infinite set of trials of this type, there were some consistent correlation between the complete set of all physical conditions throughout the past-light-cone-cross-section (which all contribute to the behavior of the chaotic system) and the three predetermined results of the particle, which we would normally assume to depend only on some small subset of conditions in the same past-light-cone-cross-section (perhaps just variables associated with the spatial location of the particle itself, or its immediate neighborhood). 
I suppose you could imagine that the particle's behavior is itself deterministically chaotic, so that even if it has predetermined results they, too, depend on the complete set of all physical conditions throughout the past light cone; but in this case the fact that the two particles always have the same result whenever the two experimenters pick the same measurement setting would itself be physically bizarre.
billschnieder said:
Try to understand my argument and you will see that this line of argument you are suggesting as a rebuttal is definitely not one. If you think you've understood my argument, summarize it in your own words.
Are you not arguing that the probability of getting a given value of lambda (or a given set of predetermined results) should not be assumed to be the same on trials where we pick one combination of measurement settings (say, a and b) as it is on trials where we pick a different combination (say, b and c)?
 
Last edited:
  • #155
JesseM said:
Because this line of argument leads inevitably to Bell inequalities, as I and others have been trying to explain to you since you started posting here. Suppose we have some large number of particle pairs, from the above you should agree that in each pair, the two particles should have some definite set of predetermined results like [+ on a, - on b, + on c] or [- on a, +on b, + on c] etc.? And for any collection of things (like particle pairs) where each member of the collection either does or doesn't have each of three possible properties A, B, and C (say A=+ on angle a, B=+ on angle b, C=+ on angle c, so "not A"=- on angle a, "not B"=- on angle b, and "not C"=- on angle c), simple arithmetic shows the whole collection must satisfy this inequality:

Number(A, not B) + Number(B, not C) ≥ Number(A, not C)

There's a proof on this page, but I think their proof is not as simple as it could be, the simplest way of seeing it is this:

Number(A, not B) = Number(A, not B, C) + Number(A, not B, not C) [since any member of the group satisfying A, not B must either have or not have property C]
Number(B, not C) = Number(A, B, not C) + Number(not A, B, not C)
Number(A, not C) = Number(A, B, not C) + Number(A, not B, not C)

And plugging this into the above inequality and cancelling like terms from both sides gives:

Number(A, not B, C) + Number(not A, B, not C) ≥ 0

Which obviously must be true since the number with any given set of properties must be ≥ 0!
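This counting argument is easy to verify by brute force. The following sketch (an illustration of the proof above, not part of the original post) assigns random (A, B, C) property triples to many hypothetical populations and confirms that the inequality can never be violated:

```python
import random

random.seed(1)

def counts(triples):
    """Tally the three quantities in the inequality for a list of (A, B, C) booleans."""
    n_A_notB = sum(1 for (a, b, c) in triples if a and not b)
    n_B_notC = sum(1 for (a, b, c) in triples if b and not c)
    n_A_notC = sum(1 for (a, b, c) in triples if a and not c)
    return n_A_notB, n_B_notC, n_A_notC

for _ in range(1000):
    # A random population of 200 "particle pairs", each with one of the
    # 8 possible predetermined property triples.
    pop = [tuple(random.random() < 0.5 for _ in range(3)) for _ in range(200)]
    n1, n2, n3 = counts(pop)
    assert n1 + n2 >= n3   # Number(A,~B) + Number(B,~C) >= Number(A,~C)

print("inequality held in all 1000 random populations")
```

No population can trip the assertion, for exactly the reason given in the proof: after cancellation the difference reduces to a sum of non-negative counts.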

Anyway, whether you like my proof or the one on the page I linked to better, hopefully you agree that if we knew the complete set of three predetermined properties for a collection of particle pairs, the inequality Number(A, not B) + Number(B, not C) ≥ Number(A, not C) would be satisfied? If so, it's a short step from there to the statement that if you measure two properties for a large number of particle pairs, P(A, not B|measured a and b) + P(B, not C|measured b and c) ≥ P(A, not C|measured a and c) (basically the only extra assumption needed is that the probability the experimenters will pick a given pair of axes to measure is uncorrelated with the triplet of predetermined results prior to measurement). I discuss this more on post #11 here, but we can also discuss it here if you agree with the inequality Number(A, not B) + Number(B, not C) ≥ Number(A, not C) for all particle pairs but don't agree that for measurements this implies P(A, not B|measured a and b) + P(B, not C|measured b and c) ≥ P(A, not C|measured a and c).

..
You will see from my Signature that my local-realism (L*R) is not of the type that you here specify. Indeed, it would be an odd LR that combines Einstein locality with extrinsic properties.

The intrinsic property that I identify as relevant (in my realism) is (simply) the orientation of the total spin.

So then, when I convert your "classical analogy" to the realism of L*R, we find something like:

P(A, ~B) = 1/2 sin^2 (a, b).

P(B, ~C) = 1/2 sin^2 (b, c).

P(A, ~C) = 1/2 sin^2 (a, c).

Thus, your classical construction cannot be maintained, let alone constructed. Or rather: Show me how these facts fit your classical analogy, please.

In short: You convert a possible "measurement" outcome (an extrinsic property) to an intrinsic property of the particle itself. As if I attributed "death" to you today, for (regrettably) you one day will be.

So when you say that "I should agree" -- I cannot!

Does the "death" analogy help you see the error in your mode of thinking; you not being a local-realist?
..
 
  • #156
Gordon Watson said:
..
You will see from my Signature that my local-realism (L*R) is not of the type that you here specify. Indeed, it would be an odd LR that combines Einstein locality with extrinsic properties.

The intrinsic property that I identify as relevant (in my realism) is (simply) the orientation of the total spin.
"Orientation of the total spin" is not something particles have a definite value for in QM prior to any spin measurement, so this must be some sort of hidden variable. But is it a local hidden variable? Does each particle have its own "orientation" value, or by "total spin" are you talking about both particles at once, and saying "orientation of the total spin" for both particles cannot be defined as a function of local variables associated with each particle? If your model includes nonlocal variables that can't be defined as functions of other local variables (in the way that magnetic flux through an extended surface can be defined as a function of the local magnetic field vector at each point on the surface), then you are not a "local realist" by any physicist's definition!
Gordon Watson said:
In short: You convert a possible "measurement" outcome (an extrinsic property) to an intrinsic property of the particle itself. As if I attributed "death" to you today, for (regrettably) you one day will be.
But I already told you that EPR/Bell were not saying the particle must already have a given property prior to measurement, just that it must have local properties that predetermine what property would be observed if that measurement was made (in a local realist model where measuring one particle allows you to predict with certainty what would be observed if the same measurement were performed on the other, that is). That was the point of my whole discussion of Einstein's two-box analogy in post #144 and how instead of the box already containing a ball or not, it might contain a holographic projector which was predetermined to either project or not project an image of a ball in response to the box being opened. Please read that post again if you're not clear on this point! Along the same lines, while it would obviously be incorrect to attribute the property of "death" to me now, it's conceivable that there are properties associated with my body and perhaps some section of the surrounding world at this time that predetermine whether a test for my death at some future date (say Jan 1. 2050) will yield the result "still alive" or "he's dead, Jim", so if someone knew all the relevant properties today they could predict with total certainty what the result of this Jan. 1 2050 test would be.

I asked you specifically about such predetermined results in post #146:
JesseM said:
So, do you think a hypothetical omniscient observer with complete knowledge of one particle's local properties (which might include the particle's own "orientation" in your model, but wouldn't include any information about the other particle) at some time prior to measurement (but after the past light cones of the two measurement regions have ceased to overlap, as in "region 3" of fig. 2 at the top of p. 3 in this paper) would be able to predict in advance with total certainty what outcome would be seen if the particle were measured at any of three detector settings? So that the observer could say something like "this particle's local properties ensure it is predetermined to give +1 if measured at angle a, -1 if measured at angle b, and -1 if measured at angle c"? And if both particles are always found to give identical observed results when measured at the same angle, would you agree this implies (under local realism) that for each pair emitted by the source, their local properties must be correlated in such a way as to ensure that one particle must have the same three predetermined results as the other one?
Your short response was "Sure; why not?" Did you actually read my question carefully before responding or were you just being flippant? Do you wish to change your answer now? The comment is asking you whether there are local properties associated with one of the particles that predetermine what result it would give to each of the three possible detector settings, before the experimenter even makes the choice of what detector setting to use. If you agree that each particle has a well-defined set of predetermined answers like this (and that both members of every entangled pair have the same three predetermined answers), I don't see how you can deny that for every single particle, it either does or doesn't satisfy the three combinations of predetermined results (A, not B) and (B, not C) and (A, not C), meaning an observer with magical knowledge of hidden variables could count how many particles satisfy each one and get Number(A, not B) and Number(B, not C) and Number(A, not C) for any series of particle pairs.
 
Last edited:
  • #157
Gordon Watson said:
..

In short: You convert a possible "measurement" outcome (an extrinsic property) to an intrinsic property of the particle itself. As if I attributed "death" to you today, for (regrettably) you one day will be.

..

But measurement outcomes are the only things that can ever be known about quantum systems ... that is one of the postulates of QM. So, it seems to me the "intrinsic property" of the particle itself, if it doesn't correlate to a measurement outcome, must be a hidden variable. The only remaining question after that is whether it is a non-local hidden variable, in which case you have a "weird" interpretation (like Bohmian mechanics), but one that is consistent with Bell's Theorem. If the "intrinsic property" is a local variable, then you are in conflict with Bell's Theorem.

In post #141, you seem to suggest that "pristine lambda" is a local hidden variable which is perturbed by measurement. Correct me if I am wrong, but it seems your chief objection to Bell is the association of measurement outcomes with local hidden variables, is that correct? I would also like to know your answers to the following questions:

1) Does the value of "pristine lambda" uniquely predict the measurement outcome for any possible detector setting?

2) If the answer to the above is "yes", then how is that any different than associating "pristine lambda" with measurement outcomes for detector settings, as done by Bell?

To me, I don't see any difference between what you have put forth and what Bell assumes in his paper, but perhaps if you can answer the above two questions, I will understand your position better.
 
  • #158
SpectraCat said:
1) Does the value of "pristine lambda" uniquely predict the measurement outcome for any possible detector setting?
If by detector setting you mean the sum total of the microscopic state of the detector at the instant of measurement, then the answer is Yes. But if by detector setting you mean the angle which Alice and Bob choose, then the answer is No. Alice and Bob have control over the angle, but not over all the microscopic dynamic properties of the detector assembly which they don't even know about.

2) If the answer to the above is "yes", then how is that any different than associating "pristine lambda" with measurement outcomes for detector settings, as done by Bell?

Because the outcome is caused by both the complete state of the particle and the complete state of the detector. The only parameter which the experimenters have any detailed knowledge of and can control is the detector angle.

That is why I mentioned earlier that the experimenters think they are measuring

|P(a,b) - P(a,c)| <= P(b,c) + 1
But in fact they are measuring

|P(a1,b1) - P(a2,c2)| <= P(b3,c3) + 1

Where the additional numbers indicate that the complete microscopic state of the system when P(a1,b1) was measured is not necessarily the same as the state when P(a2,c2) was measured, etc. Simply assuming that it must be the same, without justification, is a fatal error. The second inequality above has never been proven by anybody as a valid inequality, but this is what Bell proponents are using every day to proclaim the demise of realism/locality.
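For reference, the first inequality above is Bell's 1964 inequality, and the quantum singlet correlation E(a,b) = -cos(a - b) violates it at suitable angles. A minimal numeric check (my own illustration, not part of the post):

```python
import math

def E(theta1, theta2):
    """Singlet-state spin correlation as a function of the two analyzer angles (radians)."""
    return -math.cos(theta1 - theta2)

# Angles 0, 60, 120 degrees give a maximal violation of |P(a,b) - P(a,c)| <= 1 + P(b,c).
a, b, c = 0.0, math.pi / 3, 2 * math.pi / 3
lhs = abs(E(a, b) - E(a, c))
rhs = 1 + E(b, c)
print(lhs <= rhs)   # False: lhs is about 1.0 while rhs is about 0.5
```

Whether that violation licenses the conclusions Bell proponents draw is, of course, exactly what this post disputes.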

I'm sure JesseM will respond that the states must all be the same in the experiment because the experimenters have measured a large number of photons randomly sampling all the possible different states ... etc. This argument is naive for 2 reasons:

1) Bertrand's Paradox shows that "random" does not mean much unless you have specified exactly how you are sampling the variable. In other words, you can only sample a variable randomly if you know in advance the behaviour of the variable. How then are experimenters ever going to be able to randomly sample dynamic microscopic properties which they know nothing of? How are they to know that they have used the correct method to "randomly sample" the hidden properties, if they do not know the exact nature of all the dynamic microscopic properties affecting the outcome?

2) The argument assumes that by averaging a large number of values you obtain a value close to the correct one. JesseM will call it the law of large numbers. What he will not tell you (maybe he doesn't know this), is that the law of large numbers does not apply to non-stationary systems. The stock market for example is local-realistic, deterministic and non-stationary. That is why the average stock price for the first 100 days is not the same as the average stock price for the next 100 days for example. Bell proponents make the additional unsubstantiated assumption that the system being measured in the experiment is stationary.
 
  • #159
billschnieder said:
1) Bertrand's Paradox shows that "random" does not mean much unless you have specified exactly how you are sampling the variable.
Yes, and as soon as you do specify a method then Bertrand's paradox doesn't apply. And I did suggest some possible methods, let me refresh your memory:
JesseM said:
Bertrand's paradox has to do with the ambiguity in the phrase "random chord" since this phrase does not define any particular probability distribution on chords, what that has to do with this situation I don't know since it's assumed there is some specific physical procedure (say, a random number generator) for making the choice of what detector setting to use on each trial.

...

Now suppose that one day before measurement, Alice and Bob make their decision about which measurement setting to use based on the behavior of some very chaotic system, like the weather that day or a chaotic pendulum, which according to the butterfly effect might be in a completely different state at that time (leading to a different choice of detector settings) if even one tiny condition were different anywhere in the past light cone of that moment one year earlier.
So there you have it, two perfectly good methods that could only be correlated with the hidden variables denoted by lambda (at a time prior to the decision about what setting to use, as I noted) if there was some extremely weird "conspiracy" between seemingly unrelated events.
billschnieder said:
2) The argument assumes that by averaging a large number of values you obtain a value close to the correct one. JesseM will call it the law of large numbers. What he will not tell you (maybe he doesn't know this), is that the law of large numbers does not apply to non-stationary systems.
Of course it does, silly. Give me a list of the values of 3 stocks on a large number of successive days, say 1000. On each day, if the triplet has property "A" that means stock #1 is above a certain value (say, its average value over some previous period), if it has property "not A" that means it's below that value; likewise "B" means stock #2 is above some value on that day, "not B" means it's below, and "C" means stock #3 is above some value on that day, "not C" below. If I use a random number generator (or a chaotic pendulum) to decide which pair of stocks to measure that day, do you doubt that as the number of days gets large (again, say 1000), the probability of this inequality being violated would get very low?

Number(A, not B|a day where I measured stock 1 and stock 2) + Number(B, not C|a day where I measured stock 2 and stock 3) ≥ Number(A, not C|a day where I measured stock 1 and stock 3)

For a simple argument for why the time-dependence of stock values (or hidden variables) doesn't matter if my choice of which pair to sample has a time-independent probability that isn't correlated with the stock values/hidden variables, consider this point from [post=3307591]post 11 on the Boole vs. Bell thread[/post]:
Of course Bell does not actually assume that for a finite number of trials, exactly the same values of hidden variables occur on trials where a and b are sampled as on trials where b and c are sampled, only that the probability of a given value of lambda on a trial where the sample was a+b is the same as the probability of that value on a trial where the sample was a+c. And note that this does not exclude the notion that the probability of getting different hidden variable values could vary with time, but in that case if you knew the probability distribution for lambda at the actual times of measurement t1, t2, ... tN then you could construct a total probability distribution for lambda for a randomly selected measurement at one of those N times, and as long as the probability of choosing a+b vs. a+c or b+c was independent of the time of measurement (so for example the measurement at t2 was equally likely to be any of those three), then you can still derive the inequality.
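JesseM's point about non-stationary variables can be illustrated with a toy simulation (my own sketch; the drifting probabilities are arbitrary choices, not anything from the thread): even when the three binary properties have time-varying distributions, randomly choosing which pair to observe each day leaves the counting inequality intact, because the pair choice is uncorrelated with the day's hidden triple.

```python
import random

random.seed(2)

def run(days=20_000):
    n_ab = n_bc = n_ac = 0   # Number(A,~B | measured 1,2), etc.
    for t in range(days):
        # Non-stationary hidden properties: the bias drifts over time.
        drift = 0.3 + 0.4 * (t / days)
        A = random.random() < drift
        B = random.random() < 1 - drift
        C = random.random() < 0.5
        # The "detector setting" (which pair to look at) is chosen
        # independently of the day's hidden triple.
        pair = random.choice(["ab", "bc", "ac"])
        if pair == "ab" and A and not B:
            n_ab += 1
        elif pair == "bc" and B and not C:
            n_bc += 1
        elif pair == "ac" and A and not C:
            n_ac += 1
    return n_ab, n_bc, n_ac

n_ab, n_bc, n_ac = run()
# Each pair is sampled about 1/3 of the time, so the raw counts compare fairly.
print(n_ab + n_bc >= n_ac)   # True despite the non-stationary drift
```

The inequality survives because [A and not B] + [B and not C] >= [A and not C] holds pointwise for every triple, so it holds in expectation under any time-dependence of the triples, provided the sampling choice is independent of them.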
billschnieder said:
The stock market for example is local-realistic, deterministic and non-stationary. That is why the average stock price for the first 100 days is not the same as the average stock price for the next 100 days for example. Bell proponents make the additional unsubstantiated assumption that the system being measured in the experiment is stationary.
Nah, they don't, that's just you inventing confused strawmen again.
 
Last edited:
  • #160
SpectraCat said:
But measurement outcomes are the only things that can ever be known about quantum systems ... that is one of the postulates of QM. So, it seems to me the "intrinsic property" of the particle itself, if it doesn't correlate to a measurement outcome, must be a hidden variable. The only remaining question after that is whether it is a non-local hidden variable, in which case you have a "weird" interpretation (like Bohmian mechanics), but one that is consistent with Bell's Theorem. If the "intrinsic property" is a local variable, then you are in conflict with Bell's Theorem.

In post #141, you seem to suggest that "pristine lambda" is a local hidden variable which is perturbed by measurement. Correct me if I am wrong, but it seems your chief objection to Bell is the association of measurement outcomes with local hidden variables, is that correct? I would also like to know your answers to the following questions:

1) Does the value of "pristine lambda" uniquely predict the measurement outcome for any possible detector setting?

2) If the answer to the above is "yes", then how is that any different than associating "pristine lambda" with measurement outcomes for detector settings, as done by Bell?

To me, I don't see any difference between what you have put forth and what Bell assumes in his paper, but perhaps if you can answer the above two questions, I will understand your position better.

Thanks for your interest in understanding my position.

A-1): Your wording here is not what I would use, but I suspect you mean this: Does the value of "pristine lambda" determine the measurement outcome for any possible detector setting? I say, YES.

A-2): There is no difference IF, by "association", you mean the result of a measurement is determined by "pristine" lambda" and its interaction ["association"] with a detector and its setting.

Hope this helps.
 
  • #161
JesseM said:
"Orientation of the total spin" is not something particles have a definite value for in QM prior to any spin measurement, so this must be some sort of hidden variable.
Of course.
JesseM said:
But is it a local hidden variable?
Of course.
JesseM said:
Does each particle have its own "orientation" value,
Of course.
JesseM said:
or by "total spin" are you talking about both particles at once, and saying "orientation of the total spin" for both particles cannot be defined as a function of local variables associated with each particle?
I'm talking one particle at a time; with total spin conserved in every pair; no two pairs the same.
JesseM said:
If your model includes nonlocal variables that can't be defined as functions of other local variables (in the way that magnetic flux through an extended surface can be defined as a function of the local magnetic field vector at each point on the surface), then you are not a "local realist" by any physicist's definition!
I'm a local realist in the purist sense: My local = Einstein local. My realism = Bell's realism.
JesseM said:
But I already told you that EPR/Bell were not saying the particle must already have a given property prior to measurement, just that it must have local properties that predetermine what property would be observed if that measurement was made (in a local realist model where measuring one particle allows you to predict with certainty what would be observed if the same measurement were performed on the other, that is). That was the point of my whole discussion of Einstein's two-box analogy in post #144 and how instead of the box already containing a ball or not, it might contain a holographic projector which was predetermined to either project or not project an image of a ball in response to the box being opened. Please read that post again if you're not clear on this point! Along the same lines, while it would obviously be incorrect to attribute the property of "death" to me now, it's conceivable that there are properties associated with my body and perhaps some section of the surrounding world at this time that predetermine whether a test for my death at some future date (say Jan 1. 2050) will yield the result "still alive" or "he's dead, Jim", so if someone knew all the relevant properties today they could predict with total certainty what the result of this Jan. 1 2050 test would be.
Classical examples are not relevant when discussing the fundamentals of our quantum world; see my earlier comment on this.
JesseM said:
I asked you specifically about such predetermined results in post #146:

Your short response was "Sure; why not?" Did you actually read my question carefully before responding or were you just being flippant? Do you wish to change your answer now? The comment is asking you whether there are local properties associated with one of the particles that predetermine what result it would give to each of the three possible detector settings, before the experimenter even makes the choice of what detector setting to use. If you agree that each particle has a well-defined set of predetermined answers like this (and that both members of every entangled pair have the same three predetermined answers), I don't see how you can deny that for every single particle, it either does or doesn't satisfy the three combinations of predetermined results (A, not B) and (B, not C) and (A, not C), meaning an observer with magical knowledge of hidden variables could count how many particles satisfy each one and get Number(A, not B) and Number(B, not C) and Number(A, not C) for any series of particle pairs.

My answer was serious: You defined the omniscient One and I defined Her potential.

Q-1: Is there any reason that you work with Numbers and not normalized distributions?

Also: I gave you HER distributions in post # 155 https://www.physicsforums.com/showpost.php?p=3309843&postcount=155

Q-2: Do you accept them?
 
Last edited:
  • #162
Gordon Watson said:
Classical examples are not relevant when discussing the fundamentals of our quantum world; see my earlier comment on this.
Then why did you say "My local = Einstein local"? The analogy of the two boxes was Einstein's own to explain what kind of local, objective theory he was looking for, see the quotations from his letter in [post=3270631]this post[/post].
Gordon Watson said:
My answer was serious: You defined the omniscient One and I defined Her potential.
Right, but just to clarify, you understood that when I said "omniscient" I just meant total perfect knowledge of all hidden variables at a given moment (without any measurements disturbing anything), along with knowledge of the laws of physics allowing for the most accurate possible prediction of the future given knowledge of that moment? If there is any fundamental indeterminism in the laws of nature, this "omniscient" being may not be able to predict with certainty what will happen in the future given knowledge of hidden variables in the present.

If we are in agreement on this, do you also agree that for each particle pair, this omniscient being's knowledge of each particle's hidden "orientation" at some time before the experimenters chose their detector settings would allow her to predict with certainty what result the particle would give for each of the three measurement settings? So if we define:

A=predetermined to give spin-up if measured at angle a in the future
not-A=predetermined to give spin-down if measured at angle a in the future
B=predetermined to give spin-up if measured at angle b in the future
not-B=predetermined to give spin-down if measured at angle b in the future
C=predetermined to give spin-up if measured at angle c in the future
not-C=predetermined to give spin-down if measured at angle c in the future

Then at this time before the detector setting was chosen, this omniscient being could use her knowledge of a particle's orientation to put it in some definite category like [A, not-B, C] or [not-A, B, C]? Please give me a clear yes-or-no answer to whether you agree that the "omniscient" being (as defined above) would be able to do this.

And if your answer to that is "yes", then I don't understand why you disagreed with any part of my [post=3308440]post 149[/post]! If you do answer "yes" to the above, then please go back to that earlier post and tell me specifically which statement of mine is the first you disagree with there (perhaps you misunderstood something I was saying in that post).
Gordon Watson said:
Q-1: Is there any reason that you work with Numbers and not normalized distributions?
Yes, because first considering numbers of particles with various possible predetermined results like [not-A, not-B, C] is an integral part of how I derive the conclusion that any local realist model where such predetermined results could be known by this type of omniscient being would be one where the probability distributions on measurement outcomes would respect Bell inequalities (so that in turn shows that if QM's probability distributions violate these inequalities, that proves QM's probabilities cannot be derived from such a local realist model).
Gordon Watson said:
Also: I gave you HER distributions in post # 155 https://www.physicsforums.com/showpost.php?p=3309843&postcount=155

Q-2: Do you accept them?
I don't accept that those probability distributions could possibly be consistent with "any local realist model where such predetermined results could be known by this type of omniscient being". You can have one but not the other. And I'm asking you to try to follow along the argument as to why a local realist model implies those probability distributions are impossible (given the assumption of perfect correlations whenever both are measured at the same angle), and carefully identify the first point in the argument that you disagree with.
 
Last edited:
  • #163
JesseM said:
Yes, and as soon as you do specify a method then Bertrand's paradox doesn't apply.
Wrong. Without knowing the properties by which you intend to randomize and how they behave, it is impossible to "specify a method". In other words, you cannot perform an experiment in which a variable is screened off if you know nothing about the properties and behaviour of the variable. There's no conspiracy in that.

Take an example in which the price of a company's stock was dependent on the demand for its product. But you had no idea that the variable "product demand" was a factor; in other words, it was hidden to you, and you had no idea about how "product demand" changed over time. Now tell me a method you would choose to sample the stock price such that "product demand" was screened off.

The "methods" you claim to have provided are a joke.
JesseM said:
2) The argument assumes that by averaging a large number of values you obtain a value close to the correct one. JesseM will call it the law of large numbers. What he will not tell you (maybe he doesn't know this), is that the law of large numbers does not apply to non-stationary systems.

Of course it does, silly.
No it doesn't. Why don't you tell me what the expectation value for Apple's stock price (AAPL) is, Einstein. You have ~30 years of minute-by-minute stock price data you can work from. You have no clue what you are talking about.

Read this paper http://arxiv.org/pdf/0804.0902, you will learn a thing or two from it.
Or this one: http://arxiv.org/abs/1001.4103v1
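The non-stationarity point can be illustrated with a toy process (a hedged sketch, not a market model): a simple random walk has no fixed expectation for its time average, so repeated sample means over paths of the same length fail to cluster around any single value.

```python
import random

random.seed(0)

def sample_mean_of_walk(n):
    """Time average of one simple-random-walk path of length n."""
    x, total = 0.0, 0.0
    for _ in range(n):
        x += random.choice([-1.0, 1.0])
        total += x
    return total / n

# For an i.i.d. (stationary) sequence, the law of large numbers drives the
# sample mean toward the expectation as n grows. The walk is non-stationary:
# independent repetitions of the same-length time average scatter widely
# instead of clustering around a single value.
print([round(sample_mean_of_walk(10_000), 1) for _ in range(5)])
```

Running this with different seeds shows the five "averages" landing far apart, which is the sense in which a naive sample mean is uninformative for such a process.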
 
  • #164
I have been following this thread but not contributing because, I have to admit, I am having some trouble following the arguments. I would like to ask some questions about a particular scenario, however, to help me understand some things. I wonder what would happen if we enclosed Alice, Bob, the detectors, and the entangled particle generator inside a Schroedinger catbox. Replace Alice and Bob each with a piece of radioactive material connected to a geiger counter, which is then rigged to choose one of the three detector orientations with, let's say, equal probability. In the Schroedinger cat experiment, it was easily rigged to yield a 50-50 chance of breaking the poison vial or not; I expect we could come up with a slightly more complicated mechanism to give a 1/3-1/3-1/3 probability of choosing detector A, B, or C. Suppose we know the wave function of the entire setup when we close the box, entangled particles not yet emitted (I know, practically impossible). Now we propagate the wave function forward using an appropriate relativistic wave equation. After a time interval sufficient to be sure that the train of entangled particles has impinged on the detectors, what does the wave function look like?

I'm quite sure that the wave function would have strong components which favored the expected QM statistical results of the "unboxed" scenario, with components representing violations much less favored. I also think that, for these strong components, there would be an equal probability for each possible way that those statistics could be realized.
In other words, if, in the subspace where the Alice-machine selects A and the Bob-machine selects B, a hundred particles are measured and the angle is such that we expect 99 percent (anti)coincidence, then the trains with a single coincidence will be most highly favored; and for each of these 100-particle trains with one coincidence, the probability that the coincidence occurs for the third particle is the same as the probability it occurs for the 37th particle, etc. In other words, the particular details of a well-designed detector will not be an issue, causing a particular particle to be more likely to be the odd one out.

I have two questions - 1) What do you think about the idea that these probabilities are equal and 2) what has happened to the concepts of local realism, superluminal effects, counterfactual definiteness, etc. in this scenario?
 
Last edited:
  • #165
Rap said:
... inside a Schroedinger catbox ...

The best part about a Schrodinger catbox is the poop is only there half the time! :rofl:
 
  • #166
Rap said:
I have been following this thread but not contributing because I have to admit, I am having some trouble following the arguments. I would like to ask some questions about a particular scenario, however, to help me understand some things. I wonder what would happen if we enclosed Alice, Bob, the detectors, and the entangled particle generator inside a Schroedinger catbox. Replace Alice and Bob each with a piece of radioactive material connected to a geiger counter, ...
Making the system more complicated will not help you to understand better.

Let me try to explain it another way:

For each *specific particle pair* in the experiment, God knows exactly what will be obtained *if* Alice and Bob measure at angles (a,b). God also knows exactly what will be obtained *if* Alice and Bob instead measure at angles (b,c), *or* at angles (a,c). The real physical situation (hidden variables) of the particle pair exists and determines, together with the chosen settings, the outcome of any of those measurements. All three *alternative results* at (a,b), (b,c), (a,c) would be generated by the same actually existing physical situation (hidden variables) of the particles. In other words, they are all *possible* outcomes of the deterministic interaction of the *hidden variables* with the detectors.

God then writes down an inequality using the three *possibilities* which MUST be obeyed.

|P(a,b) - P(a,c)| <= P(b,c) + 1

Note that of the three *possibilities*, only one can be actualized, since *one pair* of photons can only be measured ONCE by Alice and Bob (i.e., *if* Alice and Bob choose settings (a,b) and measure *the* photon pair, it becomes impossible for them to choose different settings and measure *the same* photon pair!). Remember that it is God who wrote the inequality, and he does not need to perform any experiments to obtain the *possibilities* in his inequality. The poor experimenter, on the other hand, is given an impossible task: "Perform three mutually exclusive measurements (a,b), (b,c), (a,c)". So he naively thinks to himself: let me measure (a,b) on particle pair 1, i.e. (a1,b1), (b,c) on particle pair 2, i.e. (b2,c2), and (a,c) on particle pair 3, i.e. (a3,c3); hopefully all three particle pairs are exactly the same, and I should be able to substitute those results into God's inequality. Unfortunately for him, he obtains a violation. But being so foolish, he concludes that God's assumption that particles have pre-existing properties which determine the outcome is false. He forgets to question his own faulty assumption that results from *different particle pairs* can be used in God's inequality.
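The claim that God's inequality MUST be obeyed for any fixed assignment of predetermined outcomes can be checked mechanically (a sketch; here Aa, Ab, Ac are hypothetical predetermined ±1 outcomes for Alice at the three settings, with Bob's outcomes the perfect anticorrelates, as in the EPRB setup):

```python
from itertools import product

# Aa, Ab, Ac: Alice's predetermined +/-1 outcomes at settings a, b, c.
# Bob's outcome at setting x is the perfect anticorrelate -A(x).
# Enumerate all 8 deterministic local assignments and check the inequality.
for Aa, Ab, Ac in product([-1, 1], repeat=3):
    P_ab = Aa * (-Ab)   # product of outcomes if the settings were (a, b)
    P_ac = Aa * (-Ac)   # ... if the settings were (a, c)
    P_bc = Ab * (-Ac)   # ... if the settings were (b, c)
    assert abs(P_ab - P_ac) <= 1 + P_bc
# The bound holds for every assignment, so it also holds for any average
# over assignments: that is the ensemble form of the inequality.
print("all 8 predetermined assignments satisfy |P(a,b) - P(a,c)| <= 1 + P(b,c)")
```

The per-assignment check is the step the omniscient-being derivation relies on; how the bound survives (or fails to survive) averaging over many different pairs is exactly what is in dispute in this thread.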
 
  • #167
billschnieder said:
Making the system more complicated will not help you to understand better.

Yes, it's more complicated, but if it is a valid system to consider, and you do understand it, then I think all of the questions and their answers are contained in it.

billschnieder said:
Let me try to explain it another way:

(Explanation)

I understand your explanation: only one measurement is made, so only one statement is testably true, and the other statements are rendered untestable by that measurement. I think it is premature to assume that hidden variables are controlling the situation.

If you consider the whole system inside a box with Alice and Bob replaced by the simple "random event generator" I mentioned, then the wave function at a sufficiently later time will consist of a superposition of all possible outcomes. I am assuming that the wave function generated will give correct probabilities, with no hidden variables assumed, no superluminal effects, etc. This wave function contains all of "God's information" that is available to us about every possible outcome. When we open the box, the wave function collapses to an eigenstate, one of those outcomes. I think that by pondering that wave function, some answers may be found.

For example, consider the subspace containing the Alice=A, Bob=B (AB) outcome, Alice is using detector orientation A, Bob is using B. Every possible pair of measurement results will each be represented by an eigenstate with a particular probability, but the ones that yield the answer we expect from QM will be much more likely. Now what about the Alice=A, Bob=C (AC) subspace? Same thing. For the Alice=A, Bob=A (AA) same thing, but more restrictive, only perfect anticorrelation strings have non-zero probability.

Now let's look at the Bell situation. Look at the AB subspace, pick an Alice-string and a Bob-string from the available possibilities. Look for the Bob-string being measured by Alice in the BB subspace, it's there and there and there is only one string that Bob measures and its perfectly anticorrelated with it. Look for the Alice-string being measured by Bob in the AA subspace, it's there, and the string measured by Alice is perfectly anticorrelated. Now look for the Bob-string being measured by Bob in the CB subspace (Alice is set to detector C). Its there, along with a bunch of possible strings that Alice might measure. But if you look for any of those strings that Alice might measure in the CA subspace, they will not be found.

Couldn't it be that this whole scenario is possible, without invoking hidden variables, non-local reality, superluminal speeds, etc., but rather follows from the direct application of unitary propagation to wave functions?
 
Last edited:
  • #168
billschnieder said:
Making the system more complicated will not help you to understand better.

Let me try to explain it another way:

For each *specific particle pair* in the experiment, God knows exactly what will be obtained *if* Alice and Bob measured at angles (a,b). [...] He forgets to question his own faulty assumption that results from *different particle pairs* can be used in God's inequality.

So basically, QM is wrong (since the average value of correlation where theta=120 is 1/3 rather than 1/4) but God presents us with evidence to trick us into believing 1/4 is correct.

Wow, what an interesting and useful scientific hypothesis. Oh, did I leave out that it is not falsifiable either?

P.S. I have offered your little gem myself several times here (in jest of course) because it can also be used to "disprove" any and all physical conclusions (General Relativity, all QM, etc.). There is nothing special whatsoever about applying this to Bell. I would hope this is obvious to all.
 
  • #169
Rap said:
I understand your explanation, I understand that only one measurement is made so that only one statement is testably true and the other statements are rendered untestable by that measurement. I think it is premature to assume that hidden variables are controlling the situation.
Nobody is assuming that hidden variables must be controlling the situation; all I am saying is that proclamations of the demise of local hidden variables are naive and premature. In other words, the results are consistent with hidden variables, and the apparent paradox is really just due to faulty logic.

If you consider the whole system inside a box with Alice and Bob replaced by the simple "random event generator" I mentioned, then the wave function at a sufficient time later will consist of a superposition of all possible outcomes. I am assuming that the wave function generated will give correct probabilities, with no hidden variables assumed, no superluminal effects, etc. This wave function contains all of "God's information" that is available to us about every possible outcome. When we open the box, the wavefunction collapses to an eigenstate, one of those outcomes. I think that by pondering that wave function, maybe some answers may be found.

But those are exactly the answers I have been giving you. You seem to think of QM as an ontological physical theory, which it is not. QM is an information theory, very similar to Probability theory. QM manipulates information in a consistent way to prevent you from making silly mistakes, just like Probability theory. You run into problems when you start assigning ontological status to the entities within the mathematical framework. The wave function simply encapsulates the information about all the possibilities I mentioned, and what you call "collapse" is what happens when one measurement has been performed such that the others become "impossible". What QM is telling you is simply the same thing I told you, just less straightforwardly.

Couldn't it be that this whole scenario is possible, without invoking hidden variables, non-local reality, superluminal speeds, etc., but rather follows from the direct application of unitary propagation to wave functions?
What scenario? QM does not describe any ontological scenario here. You appear to be assigning ontological status to abstract mathematical constructs such as "unitary propagation" and "wave functions", which might be the real obstacle here. To give you a simple analogy:

Say you are heading out to the shop to buy some groceries, and you have in your mind what you plan to buy, but it is not a simple list, since some of the items will only be bought if you find other items. For example, you might have in your mind to buy bananas if you also find peanuts; otherwise you will buy apples. This is all in your head; there are different possibilities. It is information, and has no ontological status. This is your "wavefunction". You go to the shop and find peanuts; your wavefunction is now collapsed. What you are suggesting above is like saying there is an existing "thing" which propagates unitarily to give you the result of what you bought in the shop. If this is how you think, you should read this paper for some enlightenment:

http://bayes.wustl.edu/etj/articles/prob.in.qm.pdf
Jaynes, E. T. (1990), "Probability in Quantum Theory," in Complexity, Entropy, and the Physics of Information, W. H. Zurek (ed.), Addison-Wesley, Redwood City, CA, p. 381.
 
  • #170
billschnieder said:
You seem to think of QM as an ontological physical theory which it is not.

I absolutely agree with you: the wave function is not ontological; it is an encapsulation of our knowledge gained from measurements. Unitary propagation tells us the probability of a certain outcome being obtained from a certain measurement made at some point in the future.

But in the scenario I outlined, there was never any implication that it was anything else. We make a (practically impossible) measurement of the wave function before the detectors are chosen by the radioactive-material/geiger-counter/simple-machine (thus removing the vastly more complicated humans Alice and Bob and all implications of "free choice") and before the string of entangled particles is emitted. This wave function encapsulates our knowledge before the box is closed. Then we propagate that wave function. What we end up with, after a time long enough for the particles to have been emitted and measured, is a wave function which assigns a probability to every possible outcome we might observe upon opening the box, e.g. "Alice" (the machine) chose detector A, "Bob" chose B, and the strings are [1,1,-1,1,-1...] and [-1,1,1,-1,-1,...]. That's one possibility, and the wave function tells us the probability of it happening.

If we want to ask "what if Bob chose detector C?", that answer is in there; I think the answers to all such questions are in there, and they will violate the Bell inequalities. The wave function will give no answer to "what is the probability of simultaneous strings S1, S2, S3?" because there is no such measurement. In other words, I think QM will give the correct answer without recourse to hidden variables, superluminal speeds, etc. Assuming that a probability can be assigned to three strings at a time amounts to counterfactual definiteness, and since the wave function is silent on such a probability, I think rejection of CFD is in order. I have read a number of the papers you have referred to, and I guess I am open to the possibility of hidden variables while maintaining the violation of Bell inequalities, but I am not sure how they would fit in with the above scenario.
 
  • #171
Rap said:
The wave function will give no answer to "what is the probability of simultaneous strings S1, S2, S3" because there is no such measurement.
Correct, because it is a nonsensical question, just like asking "what is a square circle?". QM has built-in mechanisms which prevent you from asking silly questions like that. That is why it works so well.

In other words, I think QM will give the correct answer without recourse to hidden variables, superluminal speeds, etc.
I agree, QM does not need hidden variables to be able to perform its functions as it currently does. But this is not the same as saying the underlying physical processes could not be described using hidden variables. QM, in its current state, is unable to predict single-event outcomes. It doesn't mean the physics of single events cannot be described by hidden variables. The problem arises when you place QM on a pedestal and worship it as the be-all and end-all theory, which it is not, and then conclude that anything which is not required in QM is not permitted in nature.

Assuming that a probability can be assigned to three strings at a time amounts to counterfactual definiteness
This is wrong. As I explained earlier in this thread, CFD does not mean you allow three mutually exclusive statements to be true simultaneously, since that would be so nonsensical that nobody in his right mind would ever advocate CFD. CFD simply means you speak definitely of outcomes which are no longer possible. For example, consider the following two statements:

1) If A is true then X is false.
2) If A is false then X is true.

CFD doesn't mean "X is false, and X is true" -- this is a nonsensical statement. CFD means that we can speak meaningfully and unambiguously about both statements (1) and (2), which can be simultaneously true in their complete conditional form, with their conditioning statements in place, even though only one of them is *actual*. Once you dissect the consequents out, you are dealing with nonsense, not CFD.

In other words, I am saying CFD is not as nonsensical as you make it look, because in that strawman form, nobody with more than a braincell has ever advocated such a thing.
and since the wave function is silent on such a probability, I think rejection of CFD is in order.
I can also say there is no such thing as a "square circle", therefore a rejection of "square circles" is in order. But I haven't said anything meaningful. Try to understand the issue instead of looking around for stuff to *reject*.

I have read a number of the papers you have referred to, and I guess I am open to the possibility of hidden variables while maintaining the violation of Bell inequalities, but I am not sure how they would fit in with the above scenario.
There is nothing special about Bell inequalities; Boole derived them 100 years before Bell. Their violation or non-violation should not have some special status. Rather, you should ask what the inequalities represent and what their violation means for the specific case at issue.

In the EPR case, the inequalities are relationships between three simultaneously *actual* variables from the same system since they are derived from the perspective of an omniscient being who is aware of those *actual* variables.

The expectation values from QM and experiments are therefore not compatible with the inequality, since those *actual* variables cannot all be measured simultaneously. This is the cause of the violation, and it is not clear to me what else you are looking for.

Take any situation in which you have an inequality with *actual* variables and experiments in which one of those *actual* variables cannot be measured, and you will obtain a violation. And the violation will not mean you have to reject the existence of those *actual* variables. All it will mean is that you cannot use *mutually exclusive possibilities* in an expression which expects *simultaneous possibilities* or *actualities*. I have posted one example recently concerning the triangle inequality, and the OP posted one concerning coin tosses. It really is that simple, if you will get over the yearning need to reject some classical concept.

here is the triangle inequality example again:

I suppose you know about the triangle inequality, which says that for any triangle with sides labeled X, Y, Z, where x, y, z represent the lengths of the sides,

z <= x + y

Note that this inequality applies to a single triangle. What if you could only measure one side at a time? Assume that for each measurement you set the label of the side your instrument should measure, and it measured that length, destroying the triangle in the process. So you performed a large number of measurements on different triangles, measuring <z> for the first run, <x> for the next, and <y> for the next.

Do you believe the inequality
<z> <= <x> + <y>

is valid? In other words, do you believe it is legitimate to use those averages in the inequality to verify its validity?


Please answer this last question, so I know that you understand these issues here.
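The question can also be probed numerically. If the three runs sample independent triangles from one common distribution (an assumption, but the one usually made in such averaging arguments), then z <= x + y per triangle forces E[z] <= E[x] + E[y] by linearity of expectation, and the sample means inherit the bound. A sketch with a hypothetical triangle generator:

```python
import random

random.seed(1)

def random_triangle():
    # Hypothetical generator: rejection-sample side lengths until they
    # actually form a triangle, so z <= x + y holds for every sample.
    while True:
        x, y, z = (random.uniform(0.1, 1.0) for _ in range(3))
        if z <= x + y and x <= y + z and y <= z + x:
            return x, y, z

# Three independent runs, each "destroying" its triangle after one reading:
n = 5_000
mean_x = sum(random_triangle()[0] for _ in range(n)) / n
mean_y = sum(random_triangle()[1] for _ in range(n)) / n
mean_z = sum(random_triangle()[2] for _ in range(n)) / n

# Since z <= x + y holds for each individual triangle, the expectations
# satisfy E[z] <= E[x] + E[y], and the independent sample means converge
# to those expectations.
print(mean_z <= mean_x + mean_y)
```

Whether the measured runs in the thought experiment really do sample one common distribution is, of course, the point under dispute.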
 
Last edited:
  • #172
Coin Toss Simulation

It is my intention to provide a simple coin toss simulation to clearly demonstrate that the derivation by Sakurai (http://en.wikipedia.org/wiki/Sakurai%27s_Bell_inequality) is invalid and that Bell’s inequality is pointless. The coin toss experiment consists of n=100 trials of randomly tossing three coins labeled a, b, c, in which Alice randomly chooses a coin and records the uppermost face and Bob randomly chooses a coin and records the lowermost face, guaranteeing that Alice and Bob will have opposite outcomes 100% of the time when they choose the same coin.

Bell’s inequality can be derived by first adding Eq. (3) and Eq. (4) from the above website as follows:

(3) P(a+,c+) = P2 + P4
(4) P(c+,b+) = P3 + P7

The sum gives the following:

P(a+,c+) + P(c+,b+) = P2 + P4 + P3 + P7

But from Sakurai’s Eq. (2): P(a+,b+) = P3 + P4, substituting gives:

P(a+,c+) + P(c+,b+) = P(a+,b+) + P2 + P7, and because probabilities are nonnegative, one can write the inequality: P(a+,c+) + P(c+,b+) ≥ P(a+,b+).

It is easy to test the validity of the above equation, which leads to the Bell inequality, using the coin tossing experimental data. From the website, P2 = abc(++-/--+) and P7 = abc(--+/++-). Let the plus sign represent a head and the minus sign represent a tail, and instead of probability use the number of outcomes, n, which is more convenient and simpler to analyze. According to Sakurai, the following equation should be true for the coin tossing experiment where the outcomes are heads and tails.

nac(HH) + ncb(HH) = nab(HH) + nabc(HHT/TTH) + nabc(TTH/HHT)

The n = 100 coin-toss trials are given as a spreadsheet and can be viewed as a web page at http://www.atomgeometry.com/EPRB_Coin_Toss.htm . The following is a summary of the relevant data used to test the above equation.

nac(HH) = 3
ncb(HH) = 4
nab(HH) = 1
nabc(HHT/TTH) = 14
nabc(TTH/HHT) = 17

The data thus require that 3 + 4 = 1 + 14 + 17, which is obviously a false statement and remains false no matter how large you make n. It is reasonable to determine from this simple coin toss experiment that any inequality derived from this equation may be true or false depending on the specific outcomes. Regardless, knowing that the equation used to derive the inequality is false renders the inequality meaningless. Furthermore, it must be concluded that Bell’s inequality as derived says nothing about local hidden variables or nonlocality.
 
Last edited by a moderator:
  • #173
billschnieder said:
Correct, because it is a nonsensical question. [...] Take any situation in which you have an inequality with *actual* variables and experiments in which one of those *actual* variables can not be measured and you will obtain a violation. [...] Please answer this last question, so I know that you understand these issues here.

Bill, great examples and thorough reasoning.
 
  • #174
rlduncan said:
Coin Toss Simulation

It is my intention to provide a simple coin toss simulation to clearly demonstrate that the derivation by Sakurai (http://en.wikipedia.org/wiki/Sakurai%27s_Bell_inequality) is invalid and that Bell’s inequality is pointless. The coin toss experiment consists of n=100 trials of randomly tossing three coins labeled a, b, c, in which Alice randomly chooses a coin and records the uppermost face and Bob randomly chooses a coin and records the lowermost face, guaranteeing that Alice and Bob will have opposite outcomes 100% of the time when they choose the same coin.

Bell’s inequality can be derived by first adding Eq. (3) and Eq. (4) from the above website as follows:

(3) P(a+,c+) = P2 + P4
(4) P(c+,b+) = P3 + P7

The sum gives the following:

P(a+,c+) + P(c+,b+) = P2 + P4 + P3 + P7

But from Sakurai’s Eq. (2): P(a+,b+) = P3 + P4, substituting gives:

P(a+,c+) + P(c+,b+) = P(a+,b+) + P2 + P7, and because probabilities are nonnegative, one can write the inequality: P(a+,c+) + P(c+,b+) ≥ P(a+,b+).

It is easy to test the validity of the above equation, which leads to the Bell inequality, using the coin tossing experimental data. From the website, P2 = abc(++-/--+) and P7 = abc(--+/++-). Let the plus sign represent a head and the minus sign represent a tail, and instead of probability use the number of outcomes, n, which is more convenient and simpler to analyze.

All of the above looks fine, however you go off the rails at this point:
According to Sakurai the following equation should be true for the coin tossing experiment where the outcomes are heads and tails.

nac(HH) + ncb(HH) = nab(HH) + nabc(HHT/TTH) + nabc(TTH/HHT)

That is where you start to go wrong ... there is no problem with the equality as written, but you have to make sure you compare apples with apples (more below).

[EDIT: Actually, I take that back ... that last equality is simply wrong if nab, nac and nbc refer to raw coincidence counts, as opposed to probabilities.]

The n = 100 trials of the coin tossing is given as a spread sheet and can be viewed as a web page at http://www.atomgeometry.com/EPRB_Coin_Toss.htm . The following is a summary of the relevant data used to test the above equation.

Here is the problem with your analysis:

The following values are for coincident measurements in that they are ONLY counted for cases where Alice and Bob have made particular choices AND observe particular values:

Note also that the measurements below DO conform to the inequality Sakurai actually wrote (modified for your notation): nab <= nac + nbc ---> 1 <= 3 + 4 (TRUE!)

nac(HH) = 3
ncb(HH) = 4
nab(HH) = 1

The values below are "objective" in that they are counted for EVERY case, whether or not Alice and Bob's choices resulted in them getting a coincident measurement for that case. You should have been clued into the fact that something was up since the values were so much higher than for the coincident measurements above.

nabc(HHT/TTH) = 14
nabc(TTH/HHT) = 17

If you count up P3 and P4 as "objective" measurements, you will find that your equality is exactly preserved ... it has to be, because the numbers on both sides are the same.

What Sakurai was saying was that IF you do enough coincident measurements, you will find that the probability of a particular coincident measurement (say nab), with respect to the pool of successful coincident measurements, will approach the summed probabilities of the particular objective measurements which give possible matches to that coincident measurement. In your case with the coin tosses, the objective probabilities for all of the specific outcomes are equal, so your inequality amounts to 0.25 <= 0.25 + 0.25, which is obviously true.
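This point can be checked with a quick simulation of the described experiment (a sketch with hypothetical counts, not the spreadsheet's data): when each pair probability is computed within the trials where that setting pair was actually chosen, every term comes out near 0.25 and the inequality is respected.

```python
import random

random.seed(2)

# Hypothetical re-run of the coin experiment: three fair coins per trial;
# Alice reads the TOP face of her chosen coin, Bob the BOTTOM face of his,
# so same-coin choices are perfectly anticorrelated.
n = 90_000
counts = {}  # (alice_coin, bob_coin) -> (H,H coincidences, trials with that pair)
for _ in range(n):
    tops = {c: random.choice("HT") for c in "abc"}
    alice, bob = random.choice("abc"), random.choice("abc")
    hits, total = counts.get((alice, bob), (0, 0))
    hit = tops[alice] == "H" and tops[bob] == "T"  # Bob's bottom face is H
    counts[(alice, bob)] = (hits + hit, total + 1)

def P(x, y):
    # Probability of (H, H) *within* the trials where the settings were (x, y),
    # i.e. coincident counts normalized by coincident trials, not raw counts.
    hits, total = counts[(x, y)]
    return hits / total

# Each per-setting probability is near 0.25, so the inequality
# P(a+,c+) + P(c+,b+) >= P(a+,b+) holds comfortably:
print(round(P("a", "c"), 2), round(P("c", "b"), 2), round(P("a", "b"), 2))
```

Normalizing each count by its own pool of trials is exactly the apples-with-apples comparison described above; pooling raw counts from differently sized subsamples is what produced the apparent contradiction.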
 
Last edited by a moderator:
  • #175
SpectraCat said:
All of the above looks fine, however you go off the rails at this point:


That is where you start to go wrong .. there is no problem with the equality as written, but you have to make sure you compare apples with apples. ( more below)

[EDIT: Actually, I take that back .. that last equality is simply wrong, if nab, nac and nbc refer to raw coincidence counts, as opposed to probabilities.]

The reason the equation does not sum correctly is that only two of the three coins are chosen when tabulating the data. The equation is always valid when all three coins are analyzed simultaneously.

For comparison, here is a second spreadsheet for n=25 trials of a simultaneous 3-coin toss experiment:

http://www.atomgeometry.com/Simultaneous_Coin_Toss.htm .

Notice that the equation nab(HT) + nbc(HT) = nac(HT) + nabc(HTH) + nabc(THT) is true, that is, 2 + 8 = 4 + 1 + 5. For this coin tossing experiment the equation is always true, and the inequality derived from it is always true. This consistency is lacking in the EPRB coin tossing experiments, which suggests the problem lies in choosing two of the three coins to analyze instead of analyzing all three coins simultaneously (IMHO).
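The "always true" claim for the simultaneous case can be verified exhaustively (a sketch; labels as in the spreadsheet, with each trial's three faces read at once): the counting identity holds outcome-by-outcome, so it must hold for any totals.

```python
from itertools import product

# For every possible simultaneous outcome (A, B, C) of the three coins,
# check the per-trial identity behind
#   nab(HT) + nbc(HT) = nac(HT) + nabc(HTH) + nabc(THT).
for A, B, C in product("HT", repeat=3):
    lhs = (A + B == "HT") + (B + C == "HT")
    rhs = (A + C == "HT") + ((A, B, C) == ("H", "T", "H")) \
                          + ((A, B, C) == ("T", "H", "T"))
    assert lhs == rhs
# Since the identity holds for each of the 8 outcomes, summing over any
# run of trials preserves it, so the counts always balance.
print("identity holds for all 8 simultaneous outcomes")
```

This is why the simultaneous 3-coin tabulation can never produce the mismatch seen when only two of the three coins are recorded per trial.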
 
Last edited by a moderator:
