Questions about Bell: Answering Philosophical Difficulties

  • Thread starter: krimianl99
  • Tags: Bell

Summary:
Bell's theorem challenges the assumptions of locality, superdeterminism, and objective reality in light of quantum mechanics, revealing contradictions with experimental results. The discussion emphasizes that proof by negation is problematic, as it relies on identifying all non-trivial assumptions, which is often impossible. Non-locality poses significant challenges for relativity, as any exception could undermine its foundational principles. The conversation also highlights the complexities of superdeterminism, suggesting it complicates statistical reasoning in scientific inquiry. Ultimately, the implications of Bell's findings raise profound questions about the nature of reality and the limits of scientific reasoning.
  • #31
RandallB said:
NO, you are still combining all these "random variables" as if they are determinate with respect to each other; I don’t concede that, nor have you justified it, IMO.

Let us think about it from another perspective using your “Lambda Text Table”.
You cannot just assume it contains only a single line of data; that would be the equivalent of defining L_A as a dependent, not independent, variation of L_B. Instead, define a table of several lines, 180 or more, with the same data in each line but in different locations on that line.
Maybe something like:

(###, (.-,-X,-Y,-side1,side2,side3,side4,area1,area2,...))
(###, (-,-,-X,--Y,--side1--,side2,----side3,---side4,---area1,area2,...))
(###, (---X,---Y,---side1,side2,side3,side4,area1,area2,...))
(###, (-,-,-X,Y,side1,side2,side3,side4,area1,area2,...))
(###, (X,Y,side1,-,-,-side2,side3,side4,area1,area2,...))
(###, (X,-,-,-Y,side1,side2,side3,side4,area1,area2,...))
(###, (X,-,-,-Y,-,-,-side1,side2,side3,side4,area1,area2,...))
(...)

Although each line is now unique, I will allow a common location for data ### as a Ref# (like 000 to 180) indicating a source variable. We can call it a source angle, but not a polarization angle; that would be derived from the other data.
Plus, each observing function A, B, or C that Alice and Bob use is randomly selected from 180 different possible functions, with each of them only capable of reading a single line from the Lambda Table; call it a limitation of their Observation Angle #. Define that OA# also as a number from 001 to 180. Now the Source Ref# is a random value to the Observation and must be used by the observing function (say in coordination with its own OA#) to select from 180 different random extraction functions and correctly extract the random, independent data embedded in the one line of Lambda data visible from that observation angle.
Ok, but you still have to make it such, that whenever Alice pushes A and Bob pushes A, they get always the same result. And same for B and C. How are you imposing this condition ? I don't see how you built this in. What we are interested in is not the precise format of Lambda, but its EFFECTS (in terms of red/green when Alice and Bob push a button). What's the effect of your table ?

It is from THIS condition that the 8 possibilities follow. We can only consider 8 different possible EFFECTS. Hence we can group all the different Lambdas in 8 groups.
Now I can accept that you (and Bell) can reasonably take some mathematical liberties to simplify the problem as much as possible to a set of binary (0 or 1) variables. I’ll even accept that IF you can show D(X,Y,Lambda) to be a reasonable and responsible reduction of the problem,
then 2^3 = 8 is a fair conclusion.

But I am satisfied that my above interpretation of a Local and Realistic possibility for your Lambda Table application means that if I write D(X,Y,Lambda), in a binary analysis it actually means at least D(X,Y,L_A,L_B) as the minimum reduction.
Then 2^4 = 16 and the proof against LR is not rigorously complete.

What are those 16 possibilities ? Mind you that I'm not talking about the number of possible Lambdas (they can run in the gazillions), I'm talking about the different possible effects they can have at Alice and at Bob. I can only find 8, which I listed before.

Can you give me the list of 16 different possibilities (that is, the 16 different pairs of probabilities at Alice and Bob to have red), in such a way that we still obtain that each time they push the same button, they get always the same result ?
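
A minimal sketch (not from the thread) of the counting argument being debated here: enumerate the 2^3 = 8 deterministic assignments of a colour to each button. The perfect-correlation condition forces Bob's assignment to equal Alice's, and every assignment yields identical colours on different buttons at least 1/3 of the time.

```python
from itertools import product

BUTTONS = "ABC"

# The 2**3 = 8 possible "effects": a pre-assigned colour (R or G) for each
# button. Perfect correlation on identical buttons means Bob's assignment
# must equal Alice's, so a single table covers both observers.
strategies = [dict(zip(BUTTONS, colours)) for colours in product("RG", repeat=3)]
print(len(strategies))  # 8

for s in strategies:
    pairs = [(a, b) for a in BUTTONS for b in BUTTONS if a != b]
    p_same = sum(s[a] == s[b] for a, b in pairs) / len(pairs)
    print(s, "P(same colour | different buttons) =", round(p_same, 3))

# Each strategy gives 1.0 or 0.333..., so any statistical mixture of the
# 8 groups of Lambdas gives at least 1/3 -- the bound that the quantum
# prediction (1/4 at suitably chosen analyser angles) violates.
```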
 
Last edited:
  • #32
ThomasT said:
Or, is it that there is a statistical dependence between coincidental detections and associated angular differences between polarizers (that's the only way that I've seen the correlations mapped)?

Yes. Of course. Event per event.

The spatially separated data streams aren't paired randomly, are they? That is, they're determined by, eg., coincidence circuitry; so that a detection at one end severely limits the sample space at the other end.

This is true in *experiments*. That is because experiments can only approximate "ideal EPR situations", and the most severe problem is the low efficiency of light detectors, as well as with the sources of entangled photons.

But we are not talking here about experiments (which are supposed to confirm/falsify quantum predictions), we are talking here about purely formal things. We can imagine sending out, at well-known times, a single pair of entangled systems, whether these be electrons, neutrons, photons, baseballs or whatever. This is theoretically possible within the frame of quantum theory. Whether there exists an experimental technique to realize this in the lab is a different matter of course, but it is in principle possible to have such states. We analyse the statistical predictions of quantum mechanics for these states, and see that they don't comply with what one would expect under the Bell conditions.

There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).

Well, not really. You don't NEED to assume a common cause, but it would a priori be the most practical way to EXPLAIN perfect anti-correlations in the case of identical settings. It would be even more puzzling if INDEPENDENT random events at each side would give perfect anti-correlations, wouldn't it ?

This is what some people in this thread seem to forget: we start from the premise that we find PERFECT CORRELATIONS in the case of identical settings on both sides. These perfect correlations are already a prediction of quantum mechanics. This is experimentally verified BTW. But the question is: are these correlations a problem in themselves ? STARTING from these perfect correlations, would you take as an assumption that the observed things are independent, or have somehow a common origin ? If you start by saying they are independent, you ALREADY have a big problem: how come they are perfectly correlated ?
So you can "delay" the surprise by saying that the perfect correlation is the result of a common cause (the Lambda in the explanation). Of course if there is a common cause, it is possible to have perfect anti-correlations. If there is no common cause, we are ALREADY in trouble, no ?

But if we now analyze what it means, that the outcomes are determined by a common cause, well then the surprise hits us a bit later, because it implies relations between the OTHER correlations (when Alice and Bob don't make the same measurement).
 
  • #33
vanesch said:
If you take on the stance that negative reasoning (reductio ad absurdum) cannot be used, then all of science according to Popper falls down. You will never be able to falsify a statement, because you could always be falsifying just a triviality which you failed to notice (like, your experimental apparatus wasn't connected properly).

Not all proofs hinge upon negative reasoning, and there is a class of mathematicians who dispute the law of the excluded middle: that is, that something must be true or false, and that a proof of "not false" is a priori equivalent to a proof of "true". This class of mathematicians argues that only constructive proofs are valid -- that is, proofs that by construction lead to a proof that something is true. Non-constructive proofs are to them problematic because they claim to prove a fact true, while giving no means of deriving that truth, except by basing one's argument on an initial falsehood.

Constructive proofs trump non-constructive proofs, so finding a constructive proof for something only previously believed true as the consequence of non-constructive proofs is a valid and important exercise when possible.
 
  • #34
vanesch said:
But in any case, the kind of non-locality needed in Bell's type of setups is not going to be definable over a spacetime manifold, simply because it would allow you, in principle, to make "kill your grandpa" setups. Not with a Bell type system in itself, but with the non-locality required by this kind of Bell system, if the explanation is to be non-locality.


The possibility that quantum waves can travel backwards is being explored by:

http://faculty.washington.edu/jcramer/NLS/NL_signal.htm

It is also possible that light can be persuaded to travel backwards in time. The light peak shown in the following video exits, some distance from where it enters a light-conveying medium, before that peak has even arrived at that medium. It superficially at least appears to be moving backwards in time at a velocity of ~ -2c. Stacking such light-conveying mediums in series might allow the peak of a light pulse to be transmitted an arbitrary distance in zero time, thus sending a signal backwards in time.

http://www.rochester.edu/news/show.php?id=2544

The presumption that nature abhors a paradox is somewhat anthropomorphic. It is rather we who abhor paradoxes, which once known to exist we then find means of explaining, which ends up convincing us that these paradoxes were not in fact paradoxes in the first place, but rather a fault in our initial understanding of them.

There is a risk in saying that because X is impossible, it therefore cannot happen. As a scientist one is better advised to believe that the impossible can happen, and then try to find out how to make the impossible happen.

Unfortunately as Lee Smolin in "Trouble with Physics" points out and as Prof. Cramer discovered, it is hard to get funding to research anything that is not deemed worth researching. Most funders do not believe that researching magic will provide any useful return on investment, and so prefer to fund research into increasing the body of what is known about the already known, rather than research that offers some vanishingly small hope of reducing the unknown in the completely unknown.
 
  • #35
Ian Davis said:
Not all proofs hinge upon negative reasoning, and there is a class of mathematicians who dispute the law of the excluded middle: that is, that something must be true or false, and that a proof of "not false" is a priori equivalent to a proof of "true". This class of mathematicians argues that only constructive proofs are valid -- that is, proofs that by construction lead to a proof that something is true. Non-constructive proofs are to them problematic because they claim to prove a fact true, while giving no means of deriving that truth, except by basing one's argument on an initial falsehood.

Constructive proofs trump non-constructive proofs, so finding a constructive proof for something only previously believed true as the consequence of non-constructive proofs is a valid and important exercise when possible.

Yes. But I have two caveats here: first of all, constructivist mathematics http://en.wikipedia.org/wiki/Constructivism_(mathematics)
is pretty limited ; I guess most physical theories (even Newtonian mechanics) would be crippled if we limited them to the use of constructivist mathematics.

So usually physicists don't bother with such mathematical/logical hair splitting. Gosh, physicists don't even have the same standards of mathematical rigor as normal mathematicians!

The second point I already raised: Bell's theorem is NOT really based upon reductio ad absurdum, if you limit yourself to the Bell inequalities. They are DERIVED from a number of assumptions. The "reductio ad absurdum" only comes in in the last step: visibly, quantum mechanical predictions don't satisfy these inequalities.

That's as simple as, say, the triangle inequality in plane geometry:
d(x,y) + d(y,z) >= d(x,z).

This means that if we have a triangle A, B, C, we have:

AB + BC >= AC
BC + AC >= AB
AB + AC >= BC

as we can apply the triangle inequality to the 3 points in any order.

Now, imagine I give you 3 iron bars, one 1 meter long, one 2 meters long, and one 50 meters long, and I ask you if it is possible to construct a triangle with them.

You can do this by trying all possible orders of 1m, 2m and 50m for AB, AC, and BC, and figure out if you satisfy, for a given case, all 3 inequalities above. I won't do it here explicitly, but it's going to be obvious that it won't work out.

So, can I now conclude, yes, or no, that with a bar of 1m, a bar of 2m and a bar of 50m, I won't be able (in an Euclidean space) to make a triangle ?
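
This "try all orders" procedure is short enough to write down; a minimal sketch (mine, not from the thread):

```python
from itertools import permutations

def is_triangle(ab, bc, ac):
    # All three triangle inequalities must hold simultaneously.
    return ab + bc >= ac and bc + ac >= ab and ab + ac >= bc

bars = (1, 2, 50)  # lengths in meters
print(any(is_triangle(*order) for order in permutations(bars)))  # False
```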
 
  • #36
Ian Davis said:
The possibility that quantum waves can travel backwards is being explored by:

http://faculty.washington.edu/jcramer/NLS/NL_signal.htm

It is also possible that light can be persuaded to travel backwards in time. The light peak shown in the following video exits, some distance from where it enters a light-conveying medium, before that peak has even arrived at that medium. It superficially at least appears to be moving backwards in time at a velocity of ~ -2c. Stacking such light-conveying mediums in series might allow the peak of a light pulse to be transmitted an arbitrary distance in zero time, thus sending a signal backwards in time.

It is pretty obvious that one CAN'T signal faster-than-light in quantum theory. You can prove this mathematically, at least as long as all interactions are Lorentz-invariant. But this is Cramer, with his transactional interpretation, which sees backward-in-time justifications everywhere, even in trivial optical experiments like Afshar's. Look at the references: it isn't particularly PRL or something.


Now, this seems to me a particularly misleading exposition, because the way it is represented, it would be EASY to signal faster-than-light: if the pulse exits an arbitrarily long fibre even before it enters, or even before we LET IT ENTER, then it would be sufficient to show that it exits even if we don't let it enter :smile:

But if you read the article carefully, you see that there is ACTUALLY already part of the pulse inside the fibre; it is only because this is not shown in the animation that the animation suggests faster-than-light (backward-in-time) transmission.

If the guy says that *theory predicts such a behaviour* then for sure it is not backwards in time, or FTL, as this can be proven in QED.

The presumption that nature abhors a paradox is somewhat anthropomorphic. It is rather we who abhor paradoxes, which once known to exist we then find means of explaining, which ends up convincing us that these paradoxes were not in fact paradoxes in the first place, but rather a fault in our initial understanding of them.

The last sentence is correct: paradoxes are only misunderstandings (by the whole scientific community, or just by some individuals who don't understand the explanation). And btw, Bell's theorem is NOT a paradox at all.

There is a risk in saying that because X is impossible, it therefore cannot happen. As a scientist one is better advised to believe that the impossible can happen, and then try to find out how to make the impossible happen.

You can never say anything for sure about nature. But you can say things for sure about a THEORY: you can say that this or that theory will not allow this or that to happen.

You know, I asked for funding to continue my research on human body levitation by thought only, and it is refused each time. I know that current theories don't allow this (especially that silly old theory of Newtonian gravity and so on), but one should let the impossible be researched. Think of the enormous advantage: we could all float to the office by just concentrating. Think of the reduction in fossil fuel emissions! If my research works out, I would solve one of the biggest problems of humanity! I should get huge grants and I don't receive anything!
 
Last edited:
  • #37
vanesch said:
Now, imagine I give you 3 iron bars, one 1 meter long, one 2 meters long, and one 50 meters long, and I ask you if it is possible to construct a triangle with them.

You can do this by trying all possible orders of 1m, 2m and 50m for AB, AC, and BC, and figure out if you satisfy, for a given case, all 3 inequalities above. I won't do it here explicitly, but it's going to be obvious that it won't work out.

So, can I now conclude, yes, or no, that with a bar of 1m, a bar of 2m and a bar of 50m, I won't be able (in an Euclidean space) to make a triangle ?

Your example here is an interesting one in the case of Bell's theorem, because it demonstrates the way that assumptions can slide into one's thinking processes, and so cloud the truth of the matter before one. I personally would love to know whether a set of rulers 30 light years, 40 light years and 50 light years long formed a triangle in the universe we live in, irrespective of where we placed that rather large triangle. My lay reading on the matter suggests we should be much more surprised if it did than if it didn't, because it is not predicted that everywhere in space we should discover space-time to be flat.

You are also using negative reasoning in the above example. You take a specific rather extreme example of trying to form a triangle with one side more than ten times the sum of the other two sides as an example of why one can't create triangles that violate the triangle inequality. To defend the triangle inequality using this approach you would have to consider all possible sets of three lengths and show that only those satisfying it formed triangles -- certainly an exercise invented by the Terrible Trivium (allusion to The Phantom Tollbooth). A wiser though perhaps equally ineffectual approach would be to look for one counter example, find it, and then announce done.

The classic case in point is Euclid's postulate about parallel lines never meeting. People spent thousands of years trying to prove this conjecture from his other axioms, without any success whatsoever, and it was I think Riemann who found why it couldn't be done. It was his discovery which resulted in the notion of Euclidean space -- a space people had played with for thousands of years -- but more importantly all those other types of spaces which none in those thousands of years had ever considered relevant to the question at hand, or thought to use as vehicles for proving Euclid's postulate as initially expressed false.

I readily agree that Bell's theorem says that, statistically, given enough measurements, X is impossible, and that those exploring quantum mechanics see X happening. But that to me is what makes both Bell's theorem and the quantum mechanical observations so interesting. I do not personally imagine that because things seem contradictory to me they are indeed in any real sense paradoxical. What I rather imagine is that all would be made plain to me if I just knew more.

But my initial commentary was limited to your claim that nothing useful could be discovered without using negative reasoning. Give me rulers long enough and a means of reading them, and I think I could prove the universe not flat, without recourse to negative reasoning. I think you would probably agree. Ergo, proof by contradiction :-)
 
Last edited:
  • #38
vanesch said:
You know, I asked for funding to continue my research on human body levitation by thought only, and it is refused each time. I know that current theories don't allow this (especially that silly old theory of Newtonian gravity and so on), but one should let the impossible be researched. Think of the enormous advantage: we could all float to the office by just concentrating. Think of the reduction in fossil fuel emissions! If my research works out, I would solve one of the biggest problems of humanity! I should get huge grants and I don't receive anything!

I read a science fiction story as a child that, once read, was never forgotten. It was about a government conspiracy to create the appearance that a brilliant but fictitious scientist had created an anti-gravity device, only to die before he could get his results to print. The government fabricated this story, filmed the fictitious man using this device, constructed the fictitious man's residence and library, and filled this library with all the books they could imagine might be relevant to someone seeking to create an anti-gravity device, together with all sorts of quasi-theoretic ravings on random sheets of paper scattered wildly everywhere. They then implored the world's greatest scientists as a matter of some urgency to visit the dead man's residence and duplicate this dead scientist's astonishing feat.

Of course the story being fiction, it was a small step from there to have this subterfuge result in real tangible advances in the till-then-understood principles of gravity.

All of this was done to alter the perception in the minds of the remaining scientists, that what they had always imagined to be impossible might in fact be possible. For until we believe that the impossible is possible, all things that currently seem impossible will be dismissed by the vast majority of scientists as not worth exploring. Believing something impossible is not the way to conduct research, even for the few willing to explore the impossible. It creates a self-fulfilling prophecy. Better to believe that selfsame thing possible, only to prove it otherwise by a later unanticipated proof of contradiction.

Great scientists throughout history are those that reformulated that which seemed impossible into that which was the only possible. Impossible that one could distinguish gold from lead by placing the object in a bath and weighing it. Impossible that a cannon ball should fall at the same rate as a coin. Impossible that things, once pushed, should not eventually stop moving. Impossible that time was other than a universally experienced thing. Impossible that the universe might have a beginning or an end. Impossible that the universe might be expanding at a more than linear rate, given that gravity could only retard such expansion. Impossible that the Pioneer space probes should not travel distances as predicted by the laws of force and gravity. Even in such absurd studies as Newton's trying to turn lead into gold, it was not the study itself that was absurd but the attempt at producing the result while lacking a proper understanding of the mechanisms needed to turn hydrogen into gold.

I hope to live long enough to see the next impossible shattered. Thus I dream of impossible things. I think scientists should be more ready to listen to the advice (given I think by Sherlock Holmes) that when all things possible have been explored and found false, the one thing left, no matter how improbable, must be true.

Good luck getting that funding for your anti-gravity research.
 
Last edited:
  • #39
vanesch said:
Now, this seems to me a particularly misleading exposition, because the way it is represented, it would be EASY to signal faster-than-light: if the pulse exits an arbitrarily long fibre even before it enters, or even before we LET IT ENTER, then it would be sufficient to show that it exits even if we don't let it enter :smile:

True, but I find just as intriguing the question as to which way that same pulse of light is traveling within the constructed medium. The explanation that somehow the tail of the signal contains all the necessary information to construct the resulting complex wave observed, and the coincidence that the back wave visually intersects precisely as it does with the entering wave, without in any way interfering with the arriving wave, seems to me a lot less intuitive than that the pulse, on arriving at the front of the medium, travels with a velocity of ~ -2c to the other end, and then exits. The number 2 of all numbers also seems strange. Why not 1? It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does that tail of the incoming signal derive the strength to form such a strong back signal? Is the signal fractal in the sense that within some small part is the description of the whole? These are questions I can't answer, not being a physicist, but still questions that leave me troubled by the standard explanations given, that it is all smoke and mirrors.

Likewise I find Feynman's suggestion, that the spontaneous creation and destruction of positron-electron pairs in our world view is in reality electrons changing direction in time as a consequence of absorbing/emitting a photon, both intriguing and rather appealing.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why despite appearances things like light, electrons and signals cannot move backwards in time. My primary interest is in the question of time itself. I'm not well equipped to understand answers to this question, but it seems to me that time is the one question most demanding and deserving of serious thought by physicists even if that thought produces no subsequent answers.
 
Last edited:
  • #40
vanesch said:
You can in principle say that the absorbers didn't even exist (or didn't take their measurement positions) at the time of the emission of the pairs. So there must be a common cause in the past that dictates both: it can not be a direct effect of the existence of the absorbers on the source, right ?

Right.

But this means that that "cause in the past" will determine both the emission of pairs in the source, and whatever process we decided upon to set the absorbers. These can be, in principle, macroscopic processes, like throwing dice, "making up your mind", whatever.

Not exactly. That "cause in the past" will fix the position of a microscopic particle (the absorber). This will put a constraint on the experimenter's freedom, but it is by no means the only constraint at work (energy conservation is another one). A better way of expressing this is to say that the decision must be in agreement with all physical laws, including the newly proposed one. The experimenter cannot decide not to feel the force of gravity, to produce energy from nothing, etc. In the same way, he cannot decide to place an absorber where it is not supposed to be.

Imagine the experiment on a very large scale, with the arms light years long. So now we have a year to decide what button we are going to push. At Alice, we could use the same selection process that is used to decide whether a patient in a medical test gets medicine A, medicine B or a placebo, to decide whether to push button A, B or C; and at Bob, we could look at the results of another set of patients (from another test, of course) and push A, B or C according to whether it was the group with medicine A, medicine B or the placebo that is now "most healed".

If there is a common cause in the past that influences the pushings of the buttons and the emitted pair, then there is hence a common cause in the past that decides as well what we give to a patient, and whether he gets better or not.

I am sorry, but I cannot follow your reasoning here. Anyway, what you need to show is that a generic medical test (not one explicitly based on an EPR experiment to make the decisions) is meaningless if superdeterminism is true.
 
  • #41
vanesch said:
Ok, but you still have to make it such, that whenever Alice pushes A and Bob pushes A, they get always the same result. And same for B and C. How are you imposing this condition ? I don't see how you built this in. What we are interested in is not the precise format of Lambda, but its EFFECTS (in terms of red/green when Alice and Bob push a button). What's the effect of your table ?

It is from THIS condition that the 8 possibilities follow. We can only consider 8 different possible EFFECTS. Hence we can group all the different Lambdas in 8 groups.

What are those 16 possibilities ? Mind you that I'm not talking about the number of possible Lambdas (they can run in the gazillions), I'm talking about the different possible effects they can have at Alice and at Bob. I can only find 8, which I listed before.

Can you give me the list of 16 different possibilities (that is, the 16 different pairs of probabilities at Alice and Bob to have red), in such a way that we still obtain that each time they push the same button, they get always the same result ?

As soon as you said "OK" here, you are acknowledging the possibility of D(1,2,3,4) rather than D(1,2,3), meaning four independent variables and 16 possibilities, not the 8 possibilities from only 3 independent variables.
I understand you only find 8 IF you use D(1,2,3); but to justify saying you "can only find 8" you must eliminate D(1,2,3,4) as the possible minimum solution. Nothing I’ve seen does that.


There are only four possible results (RR, RG, GR, GG). You cannot use the results of a fixed configuration, Alice pushes B and Bob pushes B giving (0, .4, .6, 0), and then factor .4 x .6 to get .24 for RR; as you defined B;B, the RR option already comes up 0 times. And by your "OK" I think you're recognizing that the RG & GR probabilities here have no direct bearing on the odds of getting an RR observation. It requires a change in an independent variable, like the function button selection made by Alice or another independent variable choice made by Bob, to allow the possibility of an RR result.

All I’m saying is that the binary approach described here is not rigorously complete enough to justify it as proof against the Einstein LR claims when such a simple counter example can be provided.

Also important to note:
The counter example not only claims that Einstein LR may yet be correct, it also indicates that the Niels Bohr claim of "Completeness" for Copenhagen HUP QM is still intact and not shown to be wrong!

Remember that the Bohr claim was not that CQM was more complete than LR, but that it was complete and NO Other Theory would be able to show anything meaningfully more complete than what CQM already defined.
IMO nothing so far has, and if this binary proof were complete, it would be showing a level of completeness beyond what CQM is capable of. Although there are options for interpretations that might be more complete (MWI, BM, TI, etc.), I’ve seen nothing that indicates that any of those, or this binary approach, is conclusively correct and that CQM is wrong in claiming ‘completeness’.

Also, the scientific community IMO in general does not take both Einstein and Bohr as definitively wrong, or we would not see grants and experiments still moving forward to close "loopholes" in the EPR Bell Aspect type experiments. (The Kwiat group at Illinois worked on one of these last year, but I’m not aware of any results.)

If this boils down to different opinions of how to interpret this approach and what assumptions can be made without detailed proof, then let’s just keep our opinions; I don’t think there is enough remaining in this for either of us to change our opinion to merit debate.
 
  • #42
Originally Posted by ThomasT:
... is it that there is a statistical dependence between coincidental detections and associated angular differences between polarizers (that's the only way that I've seen the correlations mapped)?
vanesch said:
Yes. Of course. Event per event.
So these are the only correlations that we're talking about: how the rate of coincidental detection varies as you vary the angular difference between polarizer settings. (There's no correlation between events at A and events at B? That is, the detection attribute at one end isn't necessarily affected by the setting at the other, and the setting at one end isn't necessarily affected by the detection attribute at the other, and so on.)

A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes. (And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

Originally posted by ThomasT:
The spatially separated data streams aren't paired randomly, are they? That is, they're determined by, eg., coincidence circuitry; so that a detection at one end severely limits the sample space at the other end.
vanesch said:
This is true in *experiments*. That is because experiments can only approximate "ideal EPR situations", and the most severe problem is the low efficiency of light detectors, as well as with the sources of entangled photons.

But we are not talking here about experiments (which are supposed to confirm/falsify quantum predictions), we are talking here about purely formal things. We can imagine sending out, at well-known times, a single pair of entangled systems, whether these be electrons, neutrons, photons, baseballs or whatever. This is theoretically possible within the frame of quantum theory. Whether there exists an experimental technique to realize this in the lab is a different matter of course, but it is in principle possible to have such states. We analyse the statistical predictions of quantum mechanics for these states, and see that they don't comply with what one would expect under the Bell conditions.
We compare the different formulations to each other and we compare them to the results of actual experiments. Quantum theory more closely approximates experimental results. We're trying to ascertain why.

As I had written in a previous post:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).
vanesch said:
Well, not really.
What is not really? Don't you think these assumptions are part (vis a vis classical optics) of the quantum mechanical approach?
vanesch said:
You don't NEED to assume a common cause, but it would a priori be the most practical way to EXPLAIN perfect anti-correlations in the case of identical settings.
I didn't say you NEED to assume a common cause, just that that assumption was part of the development of the quantum mechanical treatment. Quantum theory says that if the commonly caused optical disturbances are emitted in opposite directions then they can be quantitatively linked via the conservation of angular momentum.
vanesch said:
It would be even more puzzling if INDEPENDENT random events at each side would give perfect anti-correlations, wouldn't it ?

This is what some people in this thread seem to forget: we start from the premise that we find PERFECT CORRELATIONS in the case of identical settings on both sides. These perfect correlations are already a prediction of quantum mechanics. This is experimentally verified BTW. But the question is: are these correlations a problem in themselves ? STARTING from these perfect correlations, would you take as an assumption that the observed things are independent, or have somehow a common origin ? If you start by saying they are independent, you ALREADY have a big problem: how come they are perfectly correlated ?
So you can "delay" the surprise by saying that the perfect correlation is the result of a common cause (the Lambda in the explanation). Of course if there is a common cause, it is possible to have perfect anti-correlations. If there is no common cause, we are ALREADY in trouble, no ?

But if we now analyze what it means, that the outcomes are determined by a common cause, well then the surprise hits us a bit later, because it implies relations between the OTHER correlations (when Alice and Bob don't make the same measurement).
The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics? That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?
 
Last edited:
  • #43
RandallB said:
As soon as you said "OK" here, you are acknowledging the possibility of D(1,2,3,4) rather than D(1,2,3), meaning four independent variables and 16 possibilities, not the 8 possibilities from only 3 independent variables.
I understand you only find 8 IF you use D(1,2,3); but to justify saying you "can only find 8" you must eliminate D(1,2,3,4) as the possible minimum solution. Nothing I’ve seen does that.

No, not at all, the 8 doesn't come from the fact that there are 3 "slots" in the D-function! It comes from the fact that there are only 8 different ways of pre-assigning an outcome (green or red) to each of the three buttons A, B and C, together with the assumption of perfect correlation, which means that Bob needs to get the same result if he pushes the same button.

Try to re-read the proof carefully. I think you totally misunderstood the mathematical reasoning.
 
  • #44
Ian Davis said:
The classic case in point is Euclid's postulate about parallel lines never meeting. People spent thousands of years trying to prove this conjecture from his other axioms, without any success whatsoever, and it was I think Riemann who found why it couldn't be done. It was his discovery which resulted in the notion of Euclidean space -- a space people had played with for thousands of years -- but more importantly all those other types of spaces which none in those thousands of years had ever considered relevant to the question at hand, or thought to use as vehicles for proving Euclid's postulate as initially expressed false.

I would say the opposite: the fact that people saw that they couldn't derive the 5th postulate from the others, even though they thought they could (and even occasionally thought they DID, but were then quickly forced into seeing that they had made a hidden assumption), means that in formal reasoning on a few pages, hidden assumptions don't survive.
 
  • #45
Ian Davis said:
Of course the story being fiction, it was a small step from there to have this subterfuge result in real tangible advances in the till-then-understood principles of gravity.

And then the opposite has been so long the case too. Astrology is an example.

In as much as *nature* can surprise us sometimes, and *falsify* theories about how nature works, I think you would have much greater difficulty in giving an impressive list of FORMAL arguments which turned out to be wrong.

Bell's theorem doesn't say anything about *nature*. It tells something about a *theory*, which is quantum mechanics: that this theory makes predictions which cannot be obtained by *another* hypothetical theory which satisfies certain conditions.
 
Last edited:
  • #46
Ian Davis said:
It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does that tail of the incoming signal derive the strength to form such a strong back signal?

What makes me wary with the given "explanations" is that apparently, everything still fits within KNOWN THEORY. Otherwise these guys would be on a Nobel: if they could say: hey, quantum optics predicts THIS, and we find experimentally THAT, and we've verified several aspects, and it is not a trivial error, we simply find SOMETHING ELSE than what known theory predicts, now that would be very interesting: we have, after about 80 years, falsified quantum theory, or at least, quantum optics! But that's NOT what they say ; they say: things behave AS THEORY PREDICTS, only, the stuff is traveling at 2c backwards. Well, that's nonsense, because current theory doesn't allow for that. So if their results DO follow current theory, then their interpretation for sure is wrong - or at best, it is only a possible interpretation amongst others.

Likewise I find Feynman's suggestion, that the spontaneous creation and destruction of positron-electron pairs in our world view is in reality electrons changing direction in time as a consequence of absorbing/emitting a photon, both intriguing and rather appealing.

Sure, but then, in modern quantum field theory, there is a mathematical construction which you can interpret BOTH WAYS: either as an electron that changes time direction, or as the creation or annihilation of an electron-positron pair.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why despite appearances things like light, electrons and signals cannot move backwards in time.

This is true, but current theory allows for a view, in all circumstances, where you can see all particles go forward in time. The "price to pay" is that you have to accept pair creation and annihilation. But in fact, this view is a bit more universal than the "backwards in time" view, in that in as much as fermions (electrons, quarks...) CAN be seen as going backward and forward in time and as such explain "creation" and "destruction", with bosons (photons, gluons), this doesn't work out anymore. They DO suffer creation and destruction in any case. So what we thought we could win by allowing "back in time" with fermions (namely, the "explanation" for pair creation and annihilation) screws up as an explanation in any case with bosons. Which makes the "back in time" view an unnecessary view.

Again, modern QFT can entirely be looked upon as "going forward in time". As such, if people come up with an experiment that is "conform to modern theory", but "clearly shows that something is traveling backwards in time", then they are making an elementary error of logic.
 
  • #47
ThomasT said:
A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes.

I have no idea why you think that the correlation function should be linear ?

(And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.
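
For concreteness, here is a minimal numeric check; the 0/60/120-degree settings are one standard illustrative choice, an assumption of this sketch rather than something fixed in the thread:

```python
import math

angles = {"A": 0.0, "B": 60.0, "C": 120.0}  # hypothetical analyser settings

def p_same(x, y):
    # QM prediction for polarisation-entangled photons (state |HH> + |VV>):
    # probability that Alice and Bob register the same outcome.
    return math.cos(math.radians(angles[x] - angles[y])) ** 2

for x, y in [("A", "A"), ("B", "B"), ("C", "C")]:
    print(x, y, p_same(x, y))  # 1.0 each: the perfect correlations
for x, y in [("A", "B"), ("B", "C"), ("A", "C")]:
    print(x, y, p_same(x, y))  # 0.25 each: below the local bound of 1/3
```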

Originally posted by ThomasT:
The spatially separated data streams aren't paired randomly, are they? That is, they're determined by, eg., coincidence circuitry; so that a detection at one end severely limits the sample space at the other end.

We compare the different formulations to each other and we compare them to the results of actual experiments. Quantum theory more closely approximates experimental results. We're trying to ascertain why.

As I had written in a previous post:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).

What is not really? Don't you think these assumptions are part (vis a vis classical optics) of the quantum mechanical approach?

I didn't say you NEED to assume a common cause, just that that assumption was part of the development of the quantum mechanical treatment. Quantum theory says that if the commonly caused optical disturbances are emitted in opposite directions then they can be quantitatively linked via the conservation of angular momentum.

The experimental optical implementation, with approximate sources and detectors, is only a very approximative approach to the very simple quantum-mechanical question of what happens to entangled pairs of particles, which are in the quantum state:

|up>|down> - |down> |up>

In the optical experiment, we are confronted with the fact that our source of entangled photons is emitting randomly, in time and in different directions, of which we can capture only a small fraction, and of which we don't have a priori timing information. But that is not a limitation of principle, it is a limitation of practicality in the lab.

So we use time coincidence as a way to ascertain that we have a high probability to deal with "two parts of an entangled pair". We also have the limited detection efficiency of the photon detectors, which means that the detectors don't trigger every time they receive an entangled photon. But we can have a certain sample of pairs of which we are pretty sure that they ARE from entangled pairs, as they show perfect (anti-)correlation, which would be unexplainable if they were of different origin.

It would be simpler, and it is in principle entirely possible, to SEND OUT entangled pairs of particles ON COMMAND, but we simply don't know how to make such an efficient source.
It would also be simpler if we had 100% efficient particle detectors. In that case, our experimental setup would resemble more closely the "black box" machine of Bell.

The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics?

Well, how do you explain perfect correlation purely on the grounds of classical optics ? If you have, say, an incident pulse on both sides with identical polarisation, which happens to be 45 degrees wrt the (identical) polariser axes at Bob and Alice, which normally gives Alice a 50% chance to see "up" and a 50% chance to see "down", and Bob too, how come that they find each time the SAME outcome ? That is, how come that when Alice sees "up" (remember, with 50% chance), Bob ALSO sees "up", and if Alice sees "down" Bob also sees down ? You'd expect a total lack of correlation in this case, no ?

Now, of course, the source won't always send out pairs which are 45 degrees away from Alice's and Bob's axis, but sometimes it will. So how come that we find perfect correlation ?
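
The quantitative failure of that naive classical picture can be seen with a small Monte Carlo: both pulses share a polarisation angle, and each detector answers independently via Malus's law. The model and its parameters are illustrative assumptions, not something proposed in the thread:

```python
import math
import random

def p_up(lam_deg, setting_deg):
    # Malus's law: probability that one side registers "up", given the
    # shared polarisation angle lam and the (common) analyser setting.
    return math.cos(math.radians(lam_deg - setting_deg)) ** 2

def p_same_outcome(n=200_000, setting=0.0):
    same = 0
    for _ in range(n):
        lam = random.uniform(0.0, 180.0)  # polarisation fixed at the source
        p = p_up(lam, setting)
        a = random.random() < p  # Alice's outcome, drawn independently
        b = random.random() < p  # Bob's outcome, drawn independently
        same += (a == b)
    return same / n

print(p_same_outcome())  # about 0.75 -- not the perfect correlation observed
```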

That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?

No, this is exactly what Bell analysed ! This would of course have been the straightforward explanation, but it doesn't work.
 
  • #48
vanesch said:
But in fact, this view is a bit more universal than the "backwards in time" view, in that in as much as fermions (electrons, quarks...) CAN be seen as going backward and forward in time and as such explain "creation" and "destruction", with bosons (photons, gluons), this doesn't work out anymore. They DO suffer creation and destruction in any case. So what we thought we could win by allowing "back in time" with fermions (namely, the "explanation" for pair creation and annihilation) screws up as an explanation in any case with bosons. Which makes the "back in time" view an unnecessary view.

I am not sure I understand what you mean by emphasising that the creation and destruction of bosons is problematic in the context of time reversal. I'd understand bosons to be absorbed and emitted (converted to/from energy absorbed or emitted by the fermion of the correct type), but I don't see how this understanding in any sense "screws up as an explanation in any case with bosons". Fermions and bosons seem such different things that one might equally well use the argument that bosons' liking to be in the same state screws up the idea that fermions hate to be in the same state. Trying to understand the behaviour of fermions as if they were bosons or vice versa seems to be comparing apples and oranges. Or is there some deep connection between bosons and fermions which relates the notion of change in charge in electrons with emission/absorption of photons, which is violated if positrons are imagined to be electrons moving backwards in time?

I had thought that photons were particularly good candidates to consider as being time reversed because under time reversal none of their fundamental characteristics change, so logically we can't even suggest which way in time a photon is travelling. I've no idea what gluons would manifest themselves as under time reversal, but at a guess they'd provide the nuclear forces associated with anti-protons, etc. Time reversed gravitons at a guess would be good candidates to explain the force of dark energy because their exchange viewed from our perspective would be pulling things apart, while from the perspective of backwards time would have them behaving just like gravitons in creating an attraction between mass. All very lay reasoning, with no math to back it, but I've not encountered the notion that it is more unreasonable to imagine bosons moving backwards in time than fermions so wish to know more, the better to improve my understanding of what can be and what can't be.
 
Last edited:
  • #49
vanesch said:
No, not at all, the 8 doesn't come from the fact that there are 3 "slots" in the D-function! It comes from the fact that there are only 8 different ways of pre-assigning an outcome (green or red) to each of the three buttons A, B and C, together with the assumption of perfect correlation, which means that Bob needs to get the same result if he pushes the same button.

Try to re-read the proof carefully. I think you totally misunderstood the mathematical reasoning.

Sorry, now you have me totally confused, and rereading your proof and prior posts is of no help. Given this current statement I am at a loss to understand what the purpose of the D-function was in your prior posts.
Nor can I figure out what kind of math you used to take three options by Alice times three options by Bob and get 8 different cases. My off-the-cuff gorilla math gives 3 x 3 = 9.

Perhaps I missed the point that the 3 functions for Alice must be identical to the 3 functions used by Bob. That would unfairly eliminate the ability of Alice or Bob to randomly select from, say, a population of 180 different functions which ones they use for their private options of A, B, & C.

Sure it may be beneficial to reduce the options for calculation purposes. But if the options are contrived so as to eliminate any independence in selection between Alice and Bob as would exist in reality, the results obtained by such contrived options would need to be doubled to account for blocking that independence from the calculation in order to represent a minimum possible result, if that degree of independence were to be allowed in as should be expected in reality.

If you're willing to do a bit of rereading yourself, notice in my example as given in post #9 that only the first of nine outcomes requires that both Alice and Bob only see RED if the other does too.
The kind of "toy examples" you describe where (AA) (BB) (CC) always has Alice and Bob seeing Red together requires a form of superdeterminism you should be avoiding! I consider demanding the A, B & C be exactly the same A, B, C functions used by the other observer an unrealistic elimination of a real indeterminate variability that cannot be ignored. I.E. a "toy example" that cannot be expected to be equivalent to real life.

As I said before, I don’t see how my doubts on this point can be changed.
And if after due reflection you still have no doubt at all in the assumptions used and conclusions made in this method, then you won’t be changing your mind either.
That simply leaves us with differing opinions about whether there is any scientific doubt remaining in this approach and its claims.
I think there is doubt and you do not; that is OK by me.

We both gave it our best shot, no need for either of us to struggle on in a pointless debate that has no hope of changing the others opinion. So I will leave it at that.
 
  • #50
RandallB said:
The kind of "toy examples" you describe where (AA) (BB) (CC) always has Alice and Bob seeing Red together requires a form of superdeterminism you should be avoiding! I consider demanding the A, B & C be exactly the same A, B, C functions used by the other observer an unrealistic elimination of a real indeterminate variability that cannot be ignored. I.E. a "toy example" that cannot be expected to be equivalent to real life.
Aren't A, B & C supposed to stand for different possible angles the experimenters can choose to measure the spins of their respective particles? Do you agree that QM predicts that when you have two particles with entangled spins, if we measure each particle's spin on the same axis, they should always have opposite spins with 100% probability?
 
  • #51
Well sure, but what’s that got to do with not allowing the opportunity for Alice and Bob to select the angle for A, B, & C independently?
Are you saying A, B, & C must all be the same (0 or 90 degree shifts) from the three functions used by the other observer?
Don’t you think that eliminates an important element of independence between Alice and Bob to predetermine the types of tests they are allowed to use down to a set of three identical functions?

From such a starting point it seems more like a gimmick designed to produce (I’m sure not intentionally) an expected or wanted result than a rational evaluation of all the independent variables possible in the problem. I really don’t see where it is any better than the von Neumann proof.
 
  • #52
RandallB said:
Well sure what’s that got to do with not allowing the opportunity for Alice and Bob to select the angle for A, B, & C independently?
They do select them independently. Say on each trial Alice has a 1/3 chance of choosing A, a 1/3 chance of choosing B, and a 1/3 chance of choosing C, and the same goes for Bob. Then on some trials they will make different choices, like Alice-B and Bob-C. But on other trials they will happen to make the same choice, like both choosing B. What I'm saying is that if we look at the subset of trials where they both happen to choose the same angle, they are 100% guaranteed to get opposite spins (or guaranteed to get the same color lighting up in vanesch's example--either way the correlation is perfect). Do you agree?
RandallB said:
Are you saying A, B & C must all be the same (0 or 90 degree shifts) as the three functions used by the other observer?
I don't understand what you're asking here. A, B & C are three distinct angles, like 0, 60, 90 or something. When you say they "must all be the same", are you talking about the assumption that each of the two particles must have a predetermined response for how it will behave if it's measured on any of the three angles? If so, this is just something we must assume if we want to believe in local realism and still explain how the particles' responses are always perfectly correlated when the experimenters happen to pick the same angle.
RandallB said:
Don't you think that predetermining the types of tests they are allowed to use, down to a set of three identical functions, eliminates an important element of independence between Alice and Bob?
I'm not sure what you're asking here either, maybe if you clarify what you meant in the previous part it'll become more clear to me.
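JesseM's same-choice subset is easy to check numerically. Below is a minimal Python sketch (my own illustration, not from the thread): it assumes the local-realist model under discussion, where each pair is created with predetermined, opposite answers for all three angles, and Alice and Bob pick their angles independently with probability 1/3 each. In the subset of trials where their choices happen to coincide, the outcomes come out opposite every single time.

Code:
import random

ANGLES = ("A", "B", "C")

def trial():
    # Local-realist assumption: the pair carries a predetermined answer for
    # every angle, with Bob's particle holding the opposite of Alice's.
    alice_card = {a: random.choice((+1, -1)) for a in ANGLES}
    bob_card = {a: -alice_card[a] for a in ANGLES}
    # Independent choices: 1/3 probability for each angle on each side.
    x, y = random.choice(ANGLES), random.choice(ANGLES)
    return x, y, alice_card[x], bob_card[y]

same, opposite = 0, 0
for _ in range(100_000):
    x, y, a, b = trial()
    if x == y:                    # the subset where the choices agree
        same += 1
        opposite += (a == -b)

print(same, opposite)  # equal counts: perfect anti-correlation in that subset

The point of the sketch is only the bookkeeping: the perfect anti-correlation is guaranteed by construction here, which is exactly the "predetermined answers" assumption a local realist is forced into.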
 
  • #53
Ian Davis said:
I had thought that photons were particularly good candidates to consider as being time reversed because under time reversal none of their fundamental characteristics change, so logically we can't even suggest which way in time a photon is travelling. I've no idea what gluons would manifest themselves as under time reversal, but at a guess they'd provide the nuclear forces associated with anti-protons, etc. Time reversed gravitons would, at a guess, be good candidates to explain the force of dark energy, because their exchange viewed from our perspective would be pulling things apart, while from the perspective of backwards time they would behave just like gravitons in creating an attraction between mass. All very lay reasoning, with no math to back it, but I've not encountered the notion that it is more unreasonable to imagine bosons moving backwards in time than fermions, so I wish to know more, the better to improve my understanding of what can be and what can't be.
All the known fundamental laws of physics are already either time-symmetric (invariant under time-reversal) or CPT-symmetric (invariant under a combination of time reversal, matter/antimatter charge reversal, and parity inversion). For a time-symmetric set of laws, what this means is that if you take a movie of a system obeying those laws and play it backwards, there will be no way for another physicist to know for sure that you are playing the movie backwards rather than forwards, since the system's behavior in the backwards movie is still obeying exactly the same laws (though the backwards movie may appear statistically unlikely if it shows entropy decreasing in an isolated system). This is true of gravitation, which is perfectly time-symmetric--a backwards movie of a gravitating system will not involve the appearance of any kind of "antigravity", despite what you might think. I discussed this in post #68 here:
Actually, gravity is time-symmetric, meaning the laws of gravity are unchanged under a time-reversal transformation--in physical terms, this means that if you look at a film of objects moving under the influence of gravity, there's no way (aside from changes in entropy) to determine if you're watching the film being played forwards or backwards. The reason it seems asymmetric is because of entropy, like how a falling object will smack the ground and dissipate most of its kinetic energy as sound and heat--if a falling object had a perfectly elastic collision with the ground so that no kinetic energy was dissipated in this way, each time it hit the ground it would bounce back up to the same height as before, so this would look the same forwards as backwards (and the reversed version of the collision where kinetic energy is dissipated is not ruled out by the laws of physics; it's just statistically unlikely that waves of sound and the random jostling of molecules due to heat would converge to give a sudden push to an object that had previously been resting on the ground...if it did happen, though, it would look just like a reversed movie of an object falling to the ground and ending up resting there). Likewise, any situation where no collisions are involved, like orbits, will still be consistent with the laws of gravity when viewed in reverse.
The idea behind CPT-symmetry is basically similar--if you take a movie of a system obeying CPT-symmetric laws, then play it backwards and take the mirror image so that the +x direction is now labeled -x, the +y now labeled -y and the +z now labeled -z (parity inversion) and you reverse the labels of particles and antiparticles (so that electrons in the original movie are now labeled as positrons in the reversed movie, and vice versa), then the new altered movie will still appear to be obeying the exact same laws as in the unaltered version.
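As a small worked illustration of the time symmetry of gravity (my own addition, consistent with the post above): Newtonian gravity contains only a second time derivative, so substituting t -> -t maps solutions to solutions:

$$\ddot{\mathbf{x}}(t) = -\frac{GM}{|\mathbf{x}(t)|^{3}}\,\mathbf{x}(t) \quad\Longrightarrow\quad \mathbf{y}(t) := \mathbf{x}(-t) \ \text{ satisfies } \ \ddot{\mathbf{y}}(t) = -\frac{GM}{|\mathbf{y}(t)|^{3}}\,\mathbf{y}(t).$$

Time reversal flips the velocity, $\dot{\mathbf{x}} \to -\dot{\mathbf{x}}$, but leaves the acceleration unchanged, which is why the reversed movie of an orbit is itself a perfectly lawful orbit.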
 
  • #54
Ian Davis said:
I am not sure I understand what you mean by emphasising that the creation and destruction of bosons is problematic in the context of time reversal. I'd understand bosons to be absorbed and emitted (converted to/from energy absorbed or emitted by the fermion of the correct type), but I don't see how this understanding in any sense "screws up as an explanation in any case with bosons".

No, that's not what I wanted to say. I wanted to say that with fermions, one might "hate the idea" of having creation and annihilation, and then one can find an "explanation" for it, which is that fermions sometimes travel back in time. As such, one can then eliminate the need to consider "creation" and "annihilation".
But even considering "traveling back in time", one cannot eliminate the need to consider "creation" and "annihilation" of bosons. So if you have to consider creation and annihilation for bosons ANYHOW (which is what you wanted to avoid when you adopted the "back in time" explanation), then you can also accept it for fermions, and any NEED to consider back-in-time propagation vanishes, as its explanatory power (doing away with creation and annihilation) was in any case not working for bosons.

In other words, the assumption that particles go back in time is never needed, as it doesn't explain anything. And we can explain everything in QFT with particles going forward in time, and considering creation/annihilation.

I had thought that photons were particularly good candidates to consider as being time reversed because under time reversal none of their fundamental characteristics change, so logically we can't even suggest which way in time a photon is travelling.

This is correct, so we can just as well take it that it goes forward, no? It will not be possible to DEMONSTRATE that it goes backward in time and that it CAN'T be seen as traveling forward in time. And this brings us back to the original article: something that complies with the actual theory can never PROVE that it went back in time!
 
  • #55
RandallB said:
Sorry, now you have me totally confused, and rereading your proof and prior posts is of no help. Given this current statement I am at a loss to understand what the purpose of the D-function was in your prior posts.

The whole idea of Bell's proof is that whether the red or the green light lights up at the Alice box is given by a probability that is determined by the "local inputs", which are two-fold: an input that comes from the "central box", and the button that Alice pushes.

That is, GIVEN these inputs, so given the message from the central box, and the choice of Alice, this gives us a probability for there to be "red" as a result (and hence, the complementary probability to have "green" of course).

Now, this can be a genuine probability, like, say, 0.6, or it can be a certainty, which comes down to the probability to be 0 (green for sure) or 1 (red for sure). We leave this open.

So GIVEN the message from the central box (lambda1 if you want), and GIVEN the choice by Alice (X, which is A, B or C), we have a function, P(X,lambda1), which gives us that famous probability.

We can hold the same reasoning at Bob's, where the function will be Q(Y,lambda2).

Now, D is the expectation value of the correlation function of Alice's and Bob's outcomes, when they have picked respectively X and Y, and when the message lambda1 was sent to Alice, and the message lambda2 was sent to Bob.

D is nothing else but the probability to have (red,red) times +1 plus the probability to have (green,green) times +1 plus the probability to have (red,green) times -1 plus the probability to have (green,red) times -1, under the assumption that Alice pushed X, that Bob pushed Y, that lambda1 was sent to Alice, and under the assumption that lambda2 was sent to Bob.

As we assume that the "drawing" is done locally (all "common information" is already taken care of by the messages lambda1 and lambda2, so we only look at the REMAINING uncertainties), we can assume that the probability to have, say, (red,red) is given by:

P(X,lambda1) x Q(Y,lambda2).

The probability to have, red-green is given by:
P(X,lambda1) x (1 - Q(Y,lambda2) )

etc...

And from this, we can calculate the above D function (the expectation over the remaining probabilities, given X, Y, lambda1 and lambda2) and we find:

D(X,Y, lambda1, lambda2) = ( 1 - 2 x P(X,lambda1) ) x (1 - 2 x Q(Y, lambda2))
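Filling in the algebra behind this step (an added aside; P abbreviates P(X,lambda1) and Q abbreviates Q(Y,lambda2)):

$$D = PQ + (1-P)(1-Q) - P(1-Q) - (1-P)Q = 4PQ - 2P - 2Q + 1 = (1-2P)(1-2Q).$$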

Now there is a triviality, which seems to be confusing you, which I applied:
we can define a new mathematical structure: lambda = { lambda1, lambda2 }. If lambda1 is a real number, and lambda2 is a real number, then lambda can be seen as a 2-dim vector. If lambda1 is a text file, and lambda2 is a text file, then lambda can be seen as the concatenation of the two text files. It is just NOTATION.

Now, if in all generality, you have a function f(x), you can ALWAYS define a function g(x,y) which is equal to f(x) for all values of y, of course.
So if P(X,lambda1) is a function of lambda1, you can ADD lambda2 as an argument, which doesn't do anything: P'(X,lambda1,lambda2) = P(X,lambda1).
Same for Q, we can define Q'(Y,lambda1,lambda2) = Q(Y, lambda2).

But we have the "vector" notation lambda which stands for {lambda1, lambda2}, so we can write P'(X,lambda) and Q'(Y,lambda). They just have one "useless" extra argument, but they are the same functions, just as g(x,y) is in fact just f(x), where y doesn't play a role. But if this confuses you, I will continue to write lambda1, lambda2.

So we can write:
D(X,Y, lambda1, lambda2) = ( 1 - 2 x P'(X,lambda1,lambda2) ) x (1 - 2 x Q'(Y, lambda1,lambda2))

And we can drop the ', and call P', simply P, and Q' simply Q.

So we can write:
D(X,Y, lambda1, lambda2) = ( 1 - 2 x P(X,lambda1,lambda2) ) x (1 - 2 x Q(Y, lambda1,lambda2))

Ok, so D was the expectation value of the correlation, GIVEN the choice of Alice and Bob, and GIVEN the (hidden) messages sent from the central box.

It is important to note that D is always a real number between -1 and +1. This comes from the fact that P and Q are probabilities, and hence between 0 and 1.

Now, we assume that those messages themselves are randomly sent out with a given probability distribution. That means, there's a certain probability Pc(lambda1,lambda2) to send out a specific couple of messages, namely {lambda1,lambda2}.

Given that Alice and Bob can't see that message, THEIR correlation function (for a given choice X and Y) will be the expectation value of D over this probability distribution of the couples (lambda1, lambda2), right ? Bob and Alice will "average" their correlation function over the messages.

So how does this work out ? Well, you have to sum of course each value of D(X,Y,lambda1,lambda2) multiplied with the probability that the messages sent out will be {lambda1,lambda2}. THIS will give you the correlation function that Bob and Alice will find when they picked X and Y, in other words, C(X,Y).

So we have that:

$$C(X,Y) = \sum_{(\lambda_1,\lambda_2)} D(X,Y,\lambda_1,\lambda_2)\,P_c(\lambda_1,\lambda_2)$$

This "sum" can be an integral over whatever is the set of the couples (lambda1,lambda2). It can be a huge set. In the case of text files, we have to sum over all thinkable couples of textfiles (but some might have probability Pc=0 of course). In the case of real numbers, we have to integrate over the plane. It doesn't matter.

The above expression is valid for the 9 different C(X,Y) values: for C(A,A), for C(A,B),...

But we KNOW certain C values: C(A,A) = 1 for instance. Does C(A,A) = 1 impose a condition on D or on Pc ?

Yes, it does. This is the whole point. Let us write out the above expression for the case C(A,A):

$$C(A,A) = 1 = \sum_{(\lambda_1,\lambda_2)} D(A,A,\lambda_1,\lambda_2)\,P_c(\lambda_1,\lambda_2)$$

Now,
$$\sum_{(\lambda_1,\lambda_2)} P_c(\lambda_1,\lambda_2) = 1$$

because it is a probability distribution, all Pc values are between 0 and 1, and D(A,A,lambda1,lambda2) is a number between -1 and 1. Such a sum can only be equal to 1 if ALL D(A,A,lambda1,lambda2) values are equal to 1 (at least, for those lambda1 and lambda2 for which Pc is not equal to 0).

So we know that D(A,A,lambda1, lambda2) = 1 for all lambda1, and all lambda2.

But we also know that D(A,A,lambda1,lambda2) = ( 1 - 2 x P(A,lambda1,lambda2) ) x (1 - 2 x Q(A, lambda1,lambda2))

So we have that:
( 1 - 2 x P(A,lambda1,lambda2) ) x (1 - 2 x Q(A, lambda1,lambda2)) = 1 for all lambda1, and lambda2.

Well, (1 - 2x)(1 - 2y), with x and y between 0 and 1, is a product of two factors that each lie between -1 and +1; such a product can only equal 1 when both factors are +1 or both are -1, which happens in exactly two cases:

x = y = 1 OR

x = y = 0.

This means that for each couple (lambda1, lambda2) we have only 2 possibilities:

OR
P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 1

OR
P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0

Of course, if you take one random lambda1 and lambda2, it can be, say, 1, and if you take another lambda1 and lambda2, it can be 0, but in each case it is one of the two.

So this means we can split the whole set of (lambda1,lambda2) couples into two parts:
those couples that give P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 1 and then the other couples, which necessarily give: P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0.

Concerning P(A,lambda1,lambda2), we hence don't need to know precisely what are lambda1, and lambda2 (text files, numbers,...), but just whether they fall in the first part, or in the second, because in the first part, P(A,lambda1,lambda2) will be equal to 1, and in the second part, it will be 0. In ANY case, P(A,lambda1,lambda2) = Q(A,lambda1,lambda2).

So if we know in which of the part the couple (lambda1,lambda2) falls, we know enough about it to know the value of P(A,lambda1,lambda2) and Q(A,lambda1,lambda2). It is either 1 or 0. So the split of the set of couples (lambda1,lambda2) comes about because of the fact that we deduced that in any case, P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) can only take up 2 possible values.

Now, we apply the same reasoning to C(B,B) = 1 and then to C(C,C) = 1, and we will now have 3 "partitions" in two of the set of (lambda1,lambda2) couples. The first partition, as we showed, determines the value of P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) (0 or 1). The second partition determines the value of P(B,lambda1,lambda2) = Q(B,lambda1,lambda2) in the same way, and the last one does so for P(C,lambda1,lambda2) = Q(C,lambda1,lambda2).

Now, if you apply 3 different partitions in 2 parts to any set, you will end up with at most 8 pieces. So our entire set of couples (lambda1,lambda2) is now cut in 8 pieces, and if we know in which piece a couple falls, we know what will be the results for the 6 functions:
P(A,lambda1,lambda2), P(B,lambda1,lambda2), P(C,lambda1,lambda2), Q(A,lambda1,lambda2), Q(B,lambda1,lambda2), Q(C,lambda1,lambda2).

Each of these functions is constant over each of the 8 different pieces of the set of (lambda1,lambda2) couples (either it is 1 or it is 0).

Now, if we know these 6 values, we know also the 9 values of
D(A,A,lambda1,lambda2), D(A,B,lambda1,lambda2), D(A,C,lambda1,lambda2) ...
D(C,C,lambda1,lambda2).

Each of these functions is CONSTANT over each of the 8 different pieces of our (lambda1,lambda2) set, because they depend on the P and Q functions which are constant. We can call these constant values D(X,Y,firstslice), D(X,Y,secondslice) ...
D(X,Y,8thslice)

Now, pick one of these, say, D(A,B,lambda1,lambda2). This function can only take on at most 8 different values, because we have only 8 different possibilities for P(A,lambda1,lambda2) and Q(B,lambda1,lambda2). But in fact it can take on only 4, because our 8 different possibilities included P(C,lambda1,lambda2) and this value doesn't enter into the calculation of D(A,B,lambda1,lambda2), so of our 8 different "slices", they will give 2 by 2 the same result (namely, the two slices that only differ for P(C,lambda1,lambda2) will not change the value of D).

Now, if we go back to
$$C(X,Y) = \sum_{(\lambda_1,\lambda_2)} D(X,Y,\lambda_1,\lambda_2)\,P_c(\lambda_1,\lambda_2)$$

and split the sum over the entire set of couples (lambda1,lambda2) into the 8 different slices:

$$C(X,Y) = \sum_{(\lambda_1,\lambda_2)\in\text{first slice}} D(X,Y,\lambda_1,\lambda_2)\,P_c(\lambda_1,\lambda_2) + \sum_{(\lambda_1,\lambda_2)\in\text{second slice}} D(X,Y,\lambda_1,\lambda_2)\,P_c(\lambda_1,\lambda_2) + \dots + \sum_{(\lambda_1,\lambda_2)\in\text{8th slice}} D(X,Y,\lambda_1,\lambda_2)\,P_c(\lambda_1,\lambda_2)$$

But within the first slice, D is constant! And within the second slice, too...
So we can bring this outside:

$$C(X,Y) = D(X,Y,\text{first slice}) \sum_{(\lambda_1,\lambda_2)\in\text{first slice}} P_c(\lambda_1,\lambda_2) + D(X,Y,\text{second slice}) \sum_{(\lambda_1,\lambda_2)\in\text{second slice}} P_c(\lambda_1,\lambda_2) + \dots + D(X,Y,\text{8th slice}) \sum_{(\lambda_1,\lambda_2)\in\text{8th slice}} P_c(\lambda_1,\lambda_2)$$

And now the sums that remain are nothing else but the sums of the probabilities of the (lambda1,lambda2) couples in the first slice (which we call p1), in the second slice (which we call p2), ...

So:
$$C(X,Y) = D(X,Y,\text{first slice})\,p_1 + D(X,Y,\text{second slice})\,p_2 + \dots + D(X,Y,\text{8th slice})\,p_8$$

But let us look a bit deeper into D(X,Y,firstslice). In the first slice, we have that P(A,lambda1,lambda2) = 1 = Q(A,lambda1,lambda2) AND
P(B,lambda1,lambda2) = 1 = Q(B,lambda1,lambda2) AND
P(C,lambda1,lambda2) = 1 = Q(C,lambda1,lambda2)

So this means that D(X,Y,firstslice) = 1 for all X and Y !

Now in the second slice, we have that:
P(A,lambda1,lambda2) = 1 = Q(A,lambda1,lambda2) AND
P(B,lambda1,lambda2) = 1 = Q(B,lambda1,lambda2) AND
P(C,lambda1,lambda2) = 0 = Q(C,lambda1,lambda2)

So this means that D(A,B,secondslice) = 1, D(A,C,secondslice) = -1, ...

Etc,...

In fact, we will find that those famous constants of D are just +1 or -1, and we can calculate them (using D(X,Y) = (1 - 2P(X))(1 - 2Q(Y))) in each slice. So there aren't even 4 possibilities for D, but only 2!

Given this, it means that we can calculate each of the 9 functions:
C(X,Y) as sums and differences of p1, p2, p3, ... p8.
But of course, we already know the C(A,A) = C(B,B) = C(C,C) = 1, because we imposed this. If you do the calculation (do it as an exercise!) you will find that each time, they come out to be p1 + p2 + ... + p8 = 1. That is because D(A,A...) = 1 for all of the slices, and D(B,B,...) = 1 for all of the slices and D(C,C,...) = 1 for all of the slices, as we already deduced before.
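To make the "do it as an exercise" step concrete, here is a short Python sketch (my own illustration: the slice picture and the weights p1,...,p8 are from the post above, while finishing with the bound C(A,B) + C(B,C) + C(A,C) >= -1 is one standard way to state the resulting Bell inequality):

Code:
import itertools
import random

SETTINGS = "ABC"
# The 8 "slices": each assigns a sign v_X = 1 - 2 P(X) = 1 - 2 Q(X) in {+1, -1}
# for X = A, B, C, as deduced from C(A,A) = C(B,B) = C(C,C) = 1.
SLICES = list(itertools.product((+1, -1), repeat=3))

def correlations(p):
    """Build the nine C(X,Y) as signed sums of the slice weights p1..p8,
    using D(X,Y, slice) = v_X * v_Y within each slice."""
    return {
        (X, Y): sum(pi * v[i] * v[j] for pi, v in zip(p, SLICES))
        for i, X in enumerate(SETTINGS)
        for j, Y in enumerate(SETTINGS)
    }

worst = float("inf")
for _ in range(10_000):
    w = [random.random() for _ in range(8)]
    p = [x / sum(w) for x in w]          # a random distribution p1..p8
    C = correlations(p)
    for X in SETTINGS:                   # the imposed perfect correlations
        assert abs(C[(X, X)] - 1) < 1e-9
    worst = min(worst, C[("A", "B")] + C[("B", "C")] + C[("A", "C")])

print(worst)  # never drops below -1, for any choice of p1..p8

Within each slice the combination v_A v_B + v_B v_C + v_A v_C is either 3 (all three signs equal) or -1 (one sign differs), so every mixture of slices stays at or above -1. For suitable angle triples, quantum mechanics predicts a value below -1, and that is the contradiction the whole thread is about.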
 
  • #56
Originally Posted by ThomasT
A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes.

vanesch said:
I have no idea why you think that the correlation function should be linear?

Where did I say that I think it should be linear? I said that a perfect correlation would be linear. But I wouldn't expect that.

Originally Posted by ThomasT
(And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

vanesch said:
But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on the one hand (C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C)) on the other can be the result of a common origin?

I don't think that's what Bell's theorem actually analyses, or maybe I just don't understand what you're saying. Anyway, let's continue.

vanesch said:
It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.

Keep in mind that we're not correlating what happens at A with what happens at B. We're correlating angular difference with coincidental detection.

If you plot only the coincidence rates corresponding to 0 and 90 degree angular differences and then connect the dots, you get a straight line, don't you? What does that tell you? It doesn't necessarily tell me much of anything.

vanesch said:
The experimental optical implementation, with approximate sources and detectors, is only a very approximative approach to the very simple quantum-mechanical question of what happens to entangled pairs of particles, which are in the quantum state:

|up>|down> - |down> |up>

In the optical experiment, we are confronted with the fact that our source of entangled photons emits randomly, in time and in different directions; we can capture only a small fraction of the pairs, and we don't have a priori timing information. But that is not a limitation of principle, it is a practical limitation in the lab.

So we use time coincidence as a way to ascertain that we have a high probability of dealing with "two parts of an entangled pair". We also have the limited detection efficiency of the photon detectors, which means the detectors don't trigger every time they receive a member of an entangled pair. But we can obtain a certain sample of pairs of which we are pretty sure that they ARE entangled pairs, as they show perfect (anti-)correlation, which would be unexplainable if they were of different origin.

It would be simpler, and it is in principle entirely possible, to SEND OUT entangled pairs of particles ON COMMAND, but we simply don't know how to make such an efficient source.
It would also be simpler if we had 100% efficient particle detectors. In that case, our experimental setup would more closely resemble the "black box" machine of Bell.

I take it, although I'm not sure, that you don't agree with:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; ie., paired detection attributes are associated with filtration events associated with, eg. optical, disturbances that emerged from, eg., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (ie., unknowable).

So, I'll ask you again:
Don't you think these assumptions, (1) and (2) above, are part (vis a vis classical optics) of the quantum mechanical approach?

Originally Posted by ThomasT
The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics?

vanesch said:
Well, how do you explain perfect (anti-)correlation purely on the grounds of classical optics? If you have, say, an incident pulse on both sides with identical polarisation, which happens to be 45 degrees wrt the (identical) polariser axes at Bob and Alice, that would normally give Alice a 50% chance to see "up" and a 50% chance to see "down", and Bob too. So how come they find the SAME outcome each time? That is, how come, when Alice sees "up" (remember, with 50% chance), Bob ALSO sees "up", and when Alice sees "down", Bob also sees "down"? You'd expect a total lack of correlation in this case, no?

Now, of course, the source won't always send out pairs which are 45 degrees away from Alice's and Bob's axes, but sometimes it will. So how come we find perfect correlation?

I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference. That wouldn't mean anything. The rate is always (in the ideal) a certain number associated with a certain angular difference. In ascertaining the correlation between angular dependence and coincidence rate you would want to plot as many rates with respect to different angular differences as you could.

If you know (have produced) the polarization of the incident light, then you can use a classical treatment, can't you? The problem is that we don't know anything about the incident pulses. Quantum theory makes two assumptions: (1) they had a common source, and (2) they are, in effect, the same thing.

Anyway, I was talking about viewing the relationship between angular dependence and coincidence rate from the perspective of classical optics -- not actually calculating the results using classical optics.

Originally Posted by ThomasT
That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?

vanesch said:
No, this is exactly what Bell analysed ! This would of course have been the straightforward explanation, but it doesn't work.

I think you're wrong about this, because it happens to be exactly what the developers of quantum theory did assume. However, in order to do accurate calculations and develop a consistent mathematical framework for the theory it was necessary to leave out certain details (about polarization for example) that were part of the classical theory, but which led to calculational problems when applied to quantum experimental phenomena. One simply can't say anything about the angle of polarization of the light incident on the polarizers-analyzers.

In place of all the metaphysical stuff of classical physics we have the quantum superposition of states (which doesn't pretend to be anything other than a mathematical contrivance).
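For reference, a small worked comparison (my own aside, not part of the exchange) between the classical-optics picture and the quantum prediction. In the classical model vanesch described, both pulses share one random polarization angle lambda, uniform over [0, pi), and each side registers "up" with the Malus-law probability cos^2 of the angle between lambda and its analyzer. Averaging over lambda gives

$$E_{\text{classical}}(\alpha,\beta) = \frac{1}{\pi}\int_0^{\pi} \cos 2(\alpha-\lambda)\,\cos 2(\beta-\lambda)\,d\lambda = \tfrac{1}{2}\cos 2(\alpha-\beta),$$

which is only 1/2 at equal settings, whereas the usual polarization-entangled state gives $E(\alpha,\beta) = \cos 2(\alpha-\beta)$, i.e. perfect correlation at $\alpha = \beta$. That gap between "same optical properties at both ends" and the observed perfect correlations is what the theorem trades on.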
 
  • #57
So what exactly is the difference between determinism and superdeterminism?
 
  • #58
ThomasT said:
So what exactly is the difference between determinism and superdeterminism?
I wrote something about this in post #29 of this thread:
From reading the wikipedia article I get the impression that superdeterminism is basically the same as the notion of a "conspiracy" in the initial conditions of the universe, which ensures that the hidden-variables state in which two particles are created will always be correlated with the "choice" of measurements that the experiments decide to make on them. So, for example, in any trial where the experimenters were predetermined to measure the same spin axis, the particles would always be created with opposite spin states on that axis, but in trials where the experimenters were not predetermined to measure the same spin axis, the hidden spin states of the two particles on any given axis would not necessarily be opposite.

Since in a deterministic universe the state of an experimenter's brain which determines his "choice" of what to measure on a given trial can be influenced by a host of factors in his past which have nothing to do with the creation of the particle (what he had for lunch that day, for example), the only way for such correlations to exist would be to pick very special initial conditions of the universe--the correlations would not be explained by the laws of physics alone (unless this constraint on the initial conditions is itself somehow demanded by the laws of physics).
 
  • #59
ThomasT said:
After reading a quote (from a BBC interview) of Bell about superdeterminism, I still don't understand the difference between superdeterminism and determinism. From what Bell said they seem to be essentially the same.

Is it that experimental violations of Bell inequalities show that the spatially separated data streams are statistically dependent?

Or, is it that there is a statistical dependence between coincidental detections and associated angular differences between polarizers (that's the only way that I've seen the correlations mapped)?
My understanding is that the "statistical independence" that is violated in superdeterminism is the independence of each particle's state prior to being measured (including the state of any 'hidden variables' associated with the particle) from each experimenter's choice of what angle to set their detector when measuring the particle. In other words, whatever it is that determines the particle's state, it must act as if it does not "know in advance" how the experimenter is going to choose to measure it. If there is a spacelike separation between the two experimenters' measurements, then if we find that the particles always give opposite results whenever the experimenters both happen to choose the same angle, the only way to explain this in a local realist universe is if both particles had predetermined answers to what result they'd give when measured on that angle, and both were assigned opposite predetermined answers when they were created at a common location. But if nature acts as if it doesn't "know in advance" what angles the experimenters will choose, then the only conclusion for a local realist must be that on every trial the particles are assigned predetermined (opposite) answers for what result they'll give when measured on any possible choice of angle.

Do you disagree with any of this?
 
  • #60
JesseM said:
My understanding is that the "statistical independence" that is violated in superdeterminism is the independence of each particle's state prior to being measured (including the state of any 'hidden variables' associated with the particle) from each experimenter's choice of what angle to set their detector when measuring the particle. In other words, whatever it is that determines the particle's state, it must act as if it does not "know in advance" how the experimenter is going to choose to measure it. If there is a spacelike separation between the two experimenters' measurements, then if we find that the particles always give opposite results whenever the experimenters both happen to choose the same angle, the only way to explain this in a local realist universe is if both particles had predetermined answers to what result they'd give when measured on that angle, and both were assigned opposite predetermined answers when they were created at a common location. But if nature acts as if it doesn't "know in advance" what angles the experimenters will choose, then the only conclusion for a local realist must be that on every trial the particles are assigned predetermined (opposite) answers for what result they'll give when measured on any possible choice of angle.

Do you disagree with any of this?

No, I don't disagree. But I still wouldn't be able to answer the question: what is the definition of superdeterminism? So, I don't agree either. :smile:

Thanks for the effort. I was looking for something a bit shorter. Is there a clear, straightforward definition for the term or isn't there?
 
