Questions about Bell: Answering Philosophical Difficulties

  • Thread starter: krimianl99
  • Tags: Bell

Summary:
Bell's theorem challenges the assumptions of locality, superdeterminism, and objective reality in light of quantum mechanics, revealing contradictions with experimental results. The discussion emphasizes that proof by negation is problematic, as it relies on identifying all non-trivial assumptions, which is often impossible. Non-locality poses significant challenges for relativity, as any exception could undermine its foundational principles. The conversation also highlights the complexities of superdeterminism, suggesting it complicates statistical reasoning in scientific inquiry. Ultimately, the implications of Bell's findings raise profound questions about the nature of reality and the limits of scientific reasoning.
  • #61
ThomasT said:
No, I don't disagree. But I still wouldn't be able to answer the question: what is the definition of superdeterminism. So, I don't agree either. :smile:

Thanks for the effort. I was looking for something a bit shorter. Is there a clear, straightforward definition for the term or isn't there?
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
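To make that proposed definition concrete, here is a toy sketch (purely illustrative, not anyone's actual model; the hidden states "L1"/"L2" and the settings "a"/"b"/"c" are made up): "no superdeterminism" just means the joint frequencies of hidden state and setting choice factorize into the product of the marginals.

```python
import random

# Toy model: each trial has a hidden particle state (set by the source)
# and a detector setting (chosen by the experimenter). "No superdeterminism"
# means the joint distribution factorizes: P(lam, s) = P(lam) * P(s).
random.seed(0)
trials = 100_000
counts = {}
for _ in range(trials):
    lam = random.choice(["L1", "L2"])   # hidden state, set by the source
    s = random.choice(["a", "b", "c"])  # setting, chosen independently
    counts[(lam, s)] = counts.get((lam, s), 0) + 1

# Each joint frequency should be close to P(lam)*P(s) = (1/2)*(1/3) = 1/6.
for key, n in sorted(counts.items()):
    print(key, round(n / trials, 3))
```

A superdeterministic model would be one where these joint frequencies systematically fail to factorize, no matter how the experimenter chooses the setting.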
 
  • #62
JesseM said:
I wrote something about this in post #29 of this thread:
Thanks, that thread was most helpful. My take on it is that the ideas of superdeterminism and determinism, for the purpose of ascertaining the meaning of Bell's theorem, are essentially synonymous and, more importantly, unnecessary.
 
  • #63
JesseM said:
I wrote something about this in post #29 of this thread:

JesseM said:
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
Put it in general form.
 
  • #64
ThomasT said:
Put it in general form.
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible: the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
 
  • #65
JesseM said:
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible: the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?
 
  • #66
JesseM said:
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible: the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
Random is defined at the instrumental level, isn't it? If so, then the polarizer settings are random. But the coincidence rates aren't random.

I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
These are the assumptions that quantum theory makes, and this is as far as it can go in talking about what is happening independent of observation. These assumptions come from the perspective of classical optics, and from these assumptions (and appropriate experimental designs) we would expect to see the observed angular dependency.

So, I don't think I need superdeterminism to avoid nonlocality.
 
  • #67
ThomasT said:
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?
I think that's all Bell meant by superdeterminism (see here and here), although different authors might not mean exactly the same thing by that word. Sometimes people talk about superdeterminism as a rejection of "counterfactual definiteness", meaning physics can no longer address questions of what would have happened if a different measurement had been made on the system, but I suppose this is just another way of saying that we cannot assume statistical independence between the choice of measurement on a system and the state of the system prior to measurement. Basically I think this amounts to a limitation on allowable initial conditions for the system and the experimenter, in statistical mechanics terms you can no longer assume that all microstates consistent with a given observed macrostate are physically allowable.
 
  • #68
ThomasT said:
Random is defined at the instrumental level, isn't it? If so, then the polarizer settings are random. But the coincidence rates aren't random.
No, the randomness here is about whether there's a correlation between the "hidden states" of particles prior to measurement and the experimenter's choice of what measurement setting to use, over a large number of trials. This is not a question that can be addressed "instrumentally", since by definition we have no way to find out what the hidden states on a given trial actually are. But if we take the perspective of an imaginary omniscient observer who knows the hidden states on each trial, it must be true that the observer either will or won't see a correlation between the complete state of a particle prior to measurement and the experimenter's choice of how to measure it--i.e. the particle either will or won't act as if it can "anticipate" in advance what the experimenter will choose.
ThomasT said:
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
But that's the whole idea that Bell's theorem intends to refute. Bell starts by imagining that the perfect correlation when both experimenters use the same detector setting is due to a common cause--each particle is created with a predetermined answer to what result it will give on any possible angle, and they are always created in such a way that they are predetermined to give opposite answers on each possible angle. But if you do make this assumption, it leads you to certain conclusions about what statistics you'll get when the experimenters choose different detector settings, and these conclusions are violated in QM.

Perhaps it would help if you looked at the example involving scratch lotto cards that I gave on another thread:
The key to seeing why you can't explain the results by just imagining the electrons had preexisting spins on each axis is to look at what happens when the two experimenters pick different axes to measure. Here's an analogy I came up with on another thread (for more info, google 'Bell's inequality'):

Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get opposite results--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a lemon.

Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card is always the opposite of the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the opposite of the other--if the first card was created with hidden fruits A+,B+,C-, then the other card must have been created with the hidden fruits A-,B-,C+.

The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find opposite fruits on at least 1/3 of the trials. For example, if we imagine Bob's card has the hidden fruits A+,B-,C+ and Alice's card has the hidden fruits A-,B+,C-, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be:

Bob picks A, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

Bob picks A, Alice picks C: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks B, Alice picks A: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks B, Alice picks C: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks C, Alice picks A: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks C, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

In this case, you can see that in 1/3 of trials where they pick different boxes, they should get opposite results. You'd get the same answer if you assumed any other preexisting state where there are two fruits of one type and one of the other, like A+,B+,C-/A-,B-,C+ or A+,B-,C-/A-,B+,C+. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, like A+,B+,C+/A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get opposite fruits with probability 1. So if you imagine that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C-/A-,B+,C+ while other pairs are created in homogeneous preexisting states like A+,B+,C+/A-,B-,C-, then the probability of getting opposite fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get opposite answers in less than 1/3 of trials where you scratch different boxes, provided you assume that each card has such a preexisting state with "hidden fruits" in each box.

But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got opposite fruits 1/4 of the time! That would be the violation of Bell's inequality, and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have opposite fruits in a given box.
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)? And if you agree with this much, do you agree or disagree with the conclusion that if you predetermine the answers in this way, this will necessarily mean that when they pick different boxes to scratch they must get opposite fruits at least 1/3 of the time?
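The 1/3 bound in the argument above can be checked mechanically. This short sketch (mine, not from the thread) enumerates every possible pair of opposite pre-printed cards and every way of scratching two different boxes:

```python
from itertools import product

# A card assigns a hidden fruit, + (cherry) or - (lemon), to boxes A, B, C.
# Bob's card is always the exact opposite of Alice's (the common cause).
min_opposite = 1.0
for alice in product("+-", repeat=3):
    bob = tuple("-" if fruit == "+" else "+" for fruit in alice)
    # The 6 ways Alice and Bob can scratch *different* boxes:
    opposite = sum(1 for i, j in product(range(3), repeat=2)
                   if i != j and alice[i] != bob[j])
    min_opposite = min(min_opposite, opposite / 6)

# No assignment of hidden fruits gives opposite results on fewer
# than 1/3 of the different-box trials.
print(min_opposite)  # 0.3333333333333333
```

The minimum is attained by the "two of one fruit, one of the other" cards; the all-same cards give opposite results every time, which is why the predicted fraction lies between 1/3 and 1.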

By the way, I also extended the scratch lotto analogy to a different Bell inequality in post #8 of this thread, if it helps.
 
  • #69
JesseM said:
They do select them independently.

I don't understand what you're asking here.
Of course you don't understand; you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles and expect that to represent "They do select them independently". We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a nonlocal site such as the other observer. In your example that means a selection of 6 angles (or at least five), such as Alice (0, 60, 90) and Bob (0, 45, 120). I can see allowing one angle (like 0) to be considered to come up the same by chance. But all three, no; that would risk oversimplifying the problem to the point of making the conclusions unreliable. All I've been saying is that this has been oversimplified and leaves the conclusion incomplete.

I see no point in rereading the same explanations of the same thing with the same predetermined restrictions being enforced on the separate observers, who should have been independent.
Maybe you two believe this binary example is conclusive; in my opinion it is not.

I will close my input to this thread by requesting a binary opinion choice to clarify our differences and confirm our opinions really are different.

Last year the Kwiat Team at Illinois received and spent over $70,000 in funding on scientific testing aimed at closing “loopholes” in the EPR-Bell question represented in this example:

The Opinion choice is RED OR BLUE. pick only one

THE RED OPINION (my opinion): agrees with scientists such as those on the Kwiat Team who do not consider any existing proof (including this binary one) conclusive, and holds that additional funding and experimental work on Bell-EPR issues such as the tests at Illinois are justified.

THE BLUE OPINION (your apparent position): that this binary proof is conclusive. Thus the efforts being expended and any additional funding of scientific testing of EPR-Bell issues are no longer justified. Such experiments as exist, along with this binary proof, belong in an undergraduate teaching environment, and advanced labs should be concerned with more important work rather than rehashing old news no one has any doubts about.

Are you guys in fact picking BLUE as your opinion?

That is all I want: your choice on this opinion, RED or BLUE. No Green, no Gray, no Red&Blue, no explanations.
I'm satisfied that my choice of Red is reasonable and that a significant number of real practicing scientists share it.

If your choice really is Blue:
Please, I need no further matrix of explanations. Address your concerns to the active scientists who obviously feel differently, as new advanced Bell-EPR type testing efforts continue. If you are successful in convincing any of those doing such testing to publicly agree with you that their testing has been unjustified and that future funding of that type is no longer justified, then I'll know I need to take another look at your arguments on this approach. No need to add them to this thread; just refer us to any papers you may publish to make your point with the scientists who need to stop wasting their efforts. If the details in your papers are enough to convince the scientific community to change their opinion to BLUE, it will be good reading for the rest of us.

I think we have shared more than enough on this with each other.
Other than looking for your opinion choice RED or BLUE I will unsubscribe from this thread.
 
  • #70
RandallB said:
Of course you don't understand; you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles
I'm not insisting there are only three possible angles, it's just a condition of the experiment that the two experimenters agree ahead of time that they will choose between three particular angles, even though there are many other possible angles they might have measured.
RandallB said:
and expect that to represent "They do select them independently".
Yes, they choose which of the three independently. Obviously, the three angles that they are choosing between were not themselves selected independently by the experimenters; as I said, they made an agreement ahead of time along the lines of "on each trial, we'll always choose one of the three angles 0, 60, 90" or whatever.
RandallB said:
We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a nonlocal site such as the other observer.
What do you mean by "functions"? They could design their experiment so that each of the three buttons automatically set the detector to one of the three angles--button A might set it to 0, button B might set it to 60, and button C might set it to 90. It doesn't make sense to argue about the setup itself, because Bell's proof assumes this sort of setup, and then shows that the results QM predicts the experimenters will get when using this particular setup are inconsistent with local realism. Are you arguing that given this experimental setup, the results predicted by QM are not inconsistent with local realism?
RandallB said:
In your example that means a selection of 6 angles (or at least five), such as Alice (0, 60, 90) and Bob (0, 45, 120).
Again, it's just part of the assumed setup that they have each agreed to choose between the same three angles on each trial. If Alice is choosing between 0, 60, and 90, then Bob must have agreed to choose between 0, 60 and 90 as well. So on one trial you might have Alice-60 and Bob-90, on another trial you might have Alice-90 and Bob-0, but there will never be a trial where either of them picks an angle that isn't 0, 60, or 90 (if these are the three angles they have agreed in advance to pick between).
RandallB said:
I will close my input to this thread by requesting a binary opinion choice to clarify our differences and confirm our opinions really are different.

Last year the Kwiat Team at Illinois received and spent over $70,000 in funding on scientific testing aimed at closing “loopholes” in the EPR-Bell question represented in this example:

The Opinion choice is RED OR BLUE. pick only one

THE RED OPINION (my opinion): agrees with scientists such as those on the Kwiat Team who do not consider any existing proof (including this binary one) conclusive, and holds that additional funding and experimental work on Bell-EPR issues such as the tests at Illinois are justified.

THE BLUE OPINION (your apparent position): that this binary proof is conclusive. Thus the efforts being expended and any additional funding of scientific testing of EPR-Bell issues are no longer justified. Such experiments as exist, along with this binary proof, belong in an undergraduate teaching environment, and advanced labs should be concerned with more important work rather than rehashing old news no one has any doubts about.
I am not addressing the issue of whether actual experiments sufficiently resemble Bell's idealized thought-experiment to constitute experimental refutations of local realism; I'm just talking about theoretical predictions here. In Bell's thought-experiment, Bell's theorem shows definitively that any local realist theory must respect the Bell inequalities, and quantum theory definitively predicts the Bell inequalities will be violated in this experiment. When people talk about "loopholes" in EPR experiments that require better tests, they are pointing out ways in which previous experiments may have fallen short of the ideal thought-experiment (not successfully detecting every pair of particles, for example); they are not arguing that the predicted violations of Bell inequalities by quantum theory fail to prove that QM is incompatible with local realism (though experiments are needed to check whether QM's predictions are actually correct in the real world). Do you agree that on a theoretical level, Bell's theorem shows beyond a shadow of a doubt that the predictions of QM are inconsistent with local realism?
 
  • #71
ThomasT said:
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?

Sort of. Superdeterminism is sometimes offered as a "solution" to Bell's Theorem that restores locality and realism. The problem is that it replaces them with something infinitely worse, something that makes no sense whatsoever. Superdeterminism is not really a theory so much as a concept: like God, its value lies in the eyes of the beholder. As far as I know, no actual working theory has ever been put forth that passes even the simplest of tests.
 
  • #72
ThomasT said:
I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference.

Hmm, no offense, but I think you have totally misunderstood the EPR-Bell type experiments. There's a common source, two opposite arms (or optical fibers or anything), and two experimental setups: Alice's and Bob's.
Each experimental setup consists of a polarizing beam splitter, which splits the incoming light into an "up" part and a "down" part, with a photomultiplier on each of the two channels. The angle of the polarizing beam splitter can be rotated.

Now, if a photon/lightpulse/... comes in which is "up" with respect to the orientation of the beamsplitter, then that photon will (ideally) make the "up" photomultiplier click, and not the down one. If the photon is in the "down" direction, then it will make the down photomultiplier click and not the up one. If the photon is polarized at 45 degrees, then it will randomly either make the up one click and not the down one, or make the down one click and not the up one.

So, if a photon is detected, in any case one of the two photomultipliers will click at Alice, never both. That's verified. But sometimes neither clicks, because of finite efficiency.

At Bob, we have the same.

Now, we look only at those pulses which are detected both at Alice and at Bob: if something clicks at Alice but not at Bob, we reject it, and vice versa. This is an item which receives some criticism, but it is due to the finite efficiency of the photomultipliers.

However, what one notices is that if Alice's and Bob's analyzers are parallel, then EACH TIME there is a click at both Alice and Bob, it is the SAME photomultiplier that clicks on both sides. That is, each time Alice's "up" photomultiplier clicks, it is also Bob's "up" multiplier that clicks, NEVER the "down" one. And each time Alice's "down" photomultiplier clicks, it is also Bob's "down" multiplier that clicks, never his "up" one.

THIS is what one means by "perfect correlation".

This can easily be explained if both photons/lightpulses are always either perfectly aligned with Bob and Alice's analyzers, or "anti-aligned". But any couple of photons that would be "in between", say at 45 degrees, will be hard to explain in a classical way: if each of them has a 50-50% chance to go up or down, at Alice or Bob, why do they do *the same thing* at both sides?

Moreover, this happens (with the same source) for all angles, as long as Alice's and Bob's analyzers are parallel. So if the source were at first emitting only photon pairs perfectly aligned or anti-aligned with Alice's and Bob's angles both at, say, 20 degrees (so they are parallel), then it is hard to explain with classical optics that when both Alice and Bob turn their angles to, say, 45 degrees, they STILL find perfect correlation, no?
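For reference, the quantum prediction being described here (for a polarization-entangled pair with same-side correlation, a standard textbook formula, not derived in this thread) can be written as a one-liner: the probability that Alice and Bob get the same outcome depends only on the angle between their analyzers.

```python
import math

# QM prediction for the probability that Alice's and Bob's detectors
# give the SAME result, as a function of the two analyzer angles
# (for the entangled state with same-side correlation described above):
def p_same(alice_deg, bob_deg):
    return math.cos(math.radians(alice_deg - bob_deg)) ** 2

# Parallel analyzers give perfect correlation at ANY common angle...
print(p_same(0, 0), p_same(20, 20), p_same(45, 45))  # 1.0 1.0 1.0
# ...while non-parallel analyzers give only partial correlation.
print(round(p_same(0, 30), 2))  # 0.75
```

The point of the text above is the first line of output: the perfect correlation survives when both analyzers are rotated together, which is what resists a naive classical-optics explanation.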
 
  • #73
RandallB said:
Of course you don't understand; you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles and expect that to represent "They do select them independently". We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a nonlocal site such as the other observer.

But we're not talking about angles here! I'm talking about a thought experiment with a box which has 3 buttons! No photons. No polarizers.

We just have a black box machine whose workings we don't know, but we suppose that it complies with some general ideas (the famous Bell assumptions of locality etc...)
Just 3 buttons on each side, labeled A, B or C, an indicator that the experiment is ready, and a red and a green light.

And then the conditions of functioning, that each time Alice and Bob happen to push the same button, they ALWAYS find that the same light lights up. It never happens that Alice and Bob both push the button C, and at Alice the green light lights up, and at Bob, the red one.
And also the condition that over a long run, at Alice, for the cases where she pushed A, she got on average about 50% red and 50% green, in the cases where she pushed B, the same, and in cases where she pushed C, the same.

These are the elements GIVEN for a thought experiment. It's the description of a thinkable setup. I can build you one with a small processor and a few wires and buttons which does this, so it is not an impossible setup.

The question is, what can we derive as conditions for the list of events where Alice happened to push A, and Bob happened to push B. And for the other list where Alice happened to push B, and Bob happened to push C. etc...

THIS is the derivation of Bell's theorem (or rather, of Bell's inequalities). He derives some conditions on those lists, given the setup and given the conditions.

IT IS A REASONING ON PAPER. So I'm NOT talking about any *experimental* observations in the lab. That's a different story.

Now, it is true of course that the "setup" here corresponds more or less to a setup of principle where there is a common "emitter of pairs of entangled particles", and then two experimental boxes where one can choose between 3 settings of angles (that's equivalent to pushing A, B or C), and get a binary output each time (red or green). It is this idealized setup which is quantified in a simple quantum-mechanical calculation, and which is (more or less well) approximated by real experiments. So these "experimental physics" issues are of course the inspiration for our reasoning. But I repeat: the reasoning presented here has a priori nothing to do with particles, angles, polarizers or anything: just with a black box setup which has certain properties, and of which we try to deduce other properties, under a number of assumptions of the workings of the black box.

So I can answer your "red or blue" question: concerning GENUINE EXPERIMENTS, of course it is a good idea to try to bring the experiment closer to the ideal situation, which is still relatively far away. So yes, in as much as the proposed experiments are indeed improvements, it is a good idea to fund them. But that's a question of how NATURE behaves (does it deviate from quantum mechanics or not).

However, concerning the *formal reasoning*, no there is not much doubt. Quantum mechanics (as a theory) is definitely not compatible with the assumptions of Bell.

In about the same way, there is not much doubt that Pythagoras' theorem follows from the assumptions (axioms) of Euclidean geometry, whether or not "real space" is well described by that Euclidean model. So whereas spending money on an experiment that tests whether or not physical space follows the Euclidean prescription might be sensible, spending money to see whether Pythagoras' theorem (on paper) follows from the Euclidean axioms would, I think, be a waste. And there is no link between the two! It is not because the Euclidean axioms happen to be incorrect in real physical space that, suddenly, Pythagoras' proof from Euclid's axioms is wrong!
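vanesch's claim that such a box could be built "with a small processor and a few wires" is easy to check in simulation. This sketch (a toy of my own, with made-up details) implements the common cause as a shared random lookup table dealt to both boxes on each trial; it reproduces the given conditions (perfect same-button correlation, roughly 50/50 lights) and, as any such local mechanism must, keeps the different-button agreement at or above the 1/3 bound:

```python
import random

# Each trial: the "source" deals both boxes the same random response
# function f : {button 0, 1, 2} -> {"R", "G"}; each box just looks up
# its own button. This is a purely local, classically buildable machine.
random.seed(1)
same_trials = same_agree = diff_trials = diff_agree = 0
for _ in range(60_000):
    f = [random.choice("RG") for _ in range(3)]      # common cause
    a, b = random.randrange(3), random.randrange(3)  # independent choices
    if a == b:
        same_trials += 1
        same_agree += (f[a] == f[b])   # same lookup table: always agrees
    else:
        diff_trials += 1
        diff_agree += (f[a] == f[b])

print(same_agree == same_trials)          # True: perfect correlation
print(diff_agree / diff_trials >= 1 / 3)  # True: Bell's bound is respected
```

What no local box of this kind can do, and what QM predicts, is drive the different-button agreement below 1/3; that gap is the content of the theorem.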
 
  • #74
JesseM said:
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)?

The nice thing about the proof presented earlier in this thread (which some here don't seem to understand) is that a priori one even leaves in place the possibility of some random element in the generation of the results: the assumption of locality only means that the *probability* of getting a cherry or a lemon is determined, not necessarily the outcome, and it FOLLOWS from the above requirement of perfect (anti-)correlation that the outcomes must be pre-determined.

I say this because a (senseless) objection sometimes raised against Bell's argument is that he *assumes* determinism. No assumption of determinism is necessary; rather, it FOLLOWS from the perfect correlation that the probabilities must be 0 or 1 (once common information is taken into account).
 
  • #75
DrChinese said:
Superdeterminism is sometimes offered as a "solution" to Bell's Theorem that restores locality and realism.

From a logical point of view it is a solution. No quotes needed.

The problem is that it replaces them with something infinitely worse, something that makes no sense whatsoever.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

Superdeterminism is not really a theory so much as a concept: like God, its value lies in the eyes of the beholder.

Superdeterminism is nothing but the old, classical determinism with a requirement of logical consistency added.

As far as I know, no actual working theory has ever been put forth that passes even the simplest of tests.

This is true, but it says nothing about the possibility that such a theory might exist.
 
  • #76
ueit said:
From a logical point of view it is a solution. No quotes needed.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

Logically, it is true that there is no argument showing that superdeterminism cannot hold in a deterministic theory. After all, everything in a deterministic frame is a function of the initial conditions, which can always be picked in exactly such a way as to obtain any correlation you want. It is on this kind of reasoning that astrology grounds itself.

However, as I pointed out already a few times, it is an empirical observation that things which don't seem to have a direct or indirect causal link happen to be statistically independent. This is the only way that we can "disentangle" cause-effect relationships (namely, by observing correlations between the "randomly" selected cause, and the observed effect). In other words, "coincidences" obey statistical laws.

It is sufficient that one single kind of phenomenon doesn't follow this rule, and as a consequence, not a single cause-effect relationship can be deduced anymore. Simply because this single effect can always be included in the "selection chain" of any cause-effect relationship and hence "spoil" the statistical independence in that relationship.

So *if* superdeterminism is true, then it is simply amazing that we COULD deduce cause-effect relationships at all, in just any domain of scientific activity. Ok, this could be part of the superdeterminism too, but it would be even MORE conspiratorial: superdeterminism that mimics ordinary determinism. Call it "hyperdeterminism" :smile:

Now that I come to think of it, it could of course explain a lot of crazy things that happen in the world... :-p
 
  • #77
Having looked up where it was brought up: super-determinism is not a special case of determinism at all; it is actually a fairly simple fourth assumption.

Such a term shouldn't even be used to describe this possibility; it is actually a whole other assumption, unrelated to the other 3. Perhaps the name is just a way to try to hide this fact.

The assumption being referred to is that there was not something that occurred in the past that both caused the person to choose the detection settings and caused the particles to behave in such a way.

The implications of that being the case are a little far-fetched, but other than that it is just plain old determinism. It is not the same thing as the objective reality assumption, since it could just be in this one case.
 
  • #78
krimianl99 said:
since it could just be in this one case.

No, not really. If there is an influence that STRONGLY CORRELATES just ANY technique that I use to make the choices at Bob, the choices at Alice, and the particles sent out, then this means that there are such correlations EVERYWHERE. As I said in another post, nothing stops me in principle from using the medicine/placebo selection in a medical trial to determine, at the same time, the settings at Bob, and using the results of the medical tests on another set of ill people to determine the settings at Alice. If there have to be correlations between the choices of Alice and Bob in all cases, then also in THIS case, and hence between any medicine/placebo selection procedure on one hand and medical tests on the other.

But this would mean that any correlation between the outcome of a medical test and whether or not a person received a treatment is never a proof of the medicine working, as I found such correlations already between two DIFFERENT sets of patients (namely those at Bob who got the medicine on one hand, and those at Alice who, whether they got better or not, determined Alice's choice).
 
  • #79
vanesch said:
If there is an influence that STRONGLY CORRELATES just ANY technique that I use to make the choices at Bob, the choices at Alice, and the particles sent out, then this means that there are such correlations EVERYWHERE.

How so? Just because you don't know what could cause such a correlation doesn't mean the correlation would always be there just because it is there when tested. Maybe an event that causes the particles to become entangled radiates a mind control wave to the experimenters, but only at times when the particles are about to be tested. It's not much more far-fetched than the whole thing to start with.

It's just a fourth assumption that has nothing to do with the others. Furthermore it illustrates my point about the differences between the limits of induction and people just making mistakes with deduction.

With more experiences, different points of view, and a lot of practice understanding the limits of induction and using them, the human race can definitely reduce uncertainty caused by the limits of induction.

But that is TOTALLY different than checking for errors in DEDUCTIVE reasoning. In one case you are checking for something similar to a typo, and in the other you are being totally paranoid that anything that you haven't already thought of could be going on.

In a real proof, all induction is limited to the premises. As long as you don't have a reason to doubt the premises, the proof holds. In so-called "proof" by negation, the whole thing is subject to the limits of induction.
 
  • #80
krimianl99 said:
In a real proof, all induction is limited to the premises. As long as you don't have a reason to doubt the premises, the proof holds. In so-called "proof" by negation, the whole thing is subject to the limits of induction.
What do you mean here? Would you deny that "proof by contradiction" is a deductive argument rather than an inductive one? It's often used in mathematics, for example (look at this proof that there is no largest prime number). And Bell's theorem can be understood as a purely theoretical argument to show that certain classes of mathematical laws cannot generate the same statistical predictions as the theory of QM.
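For readers who want to see the structure of such a deductive argument concretely, here is a small illustrative Python sketch (my own, not from the linked page) of Euclid's reasoning: for any finite list of primes, the product of the list plus one must have a prime factor outside the list, so no finite list can contain all primes.

```python
# Illustrative sketch: Euclid's proof by contradiction that there is no
# largest prime, run on a concrete finite list of primes.

def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

# Suppose, for contradiction, this list contained ALL primes:
primes = [2, 3, 5, 7, 11, 13]

product_plus_one = 1
for p in primes:
    product_plus_one *= p
product_plus_one += 1  # 30031

# No prime in the list divides product_plus_one (each leaves remainder 1),
# yet it must have SOME prime factor - which therefore lies outside the list.
witness = smallest_prime_factor(product_plus_one)
print(witness, witness in primes)  # 59 False
```

The point is that nothing inductive is involved: the contradiction follows purely from the premises, exactly as in Bell's theorem.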
 
  • #81
Ian Davis said:
True, but I find just as intriguing the question as to which way that same pulse of light is traveling within the constructed medium. The explanation that somehow the tail of the signal contains all the necessary information to construct the resulting complex wave observed, and the coincidence that the back wave visually intersects precisely as it does with the entering wave, without in any way interfering with the arriving wave, seems to me a lot less intuitive than that the pulse, on arriving at the front of the medium, travels with a velocity of ~ -2c to the other end, and then exits. The number 2, of all numbers, also seems strange. Why not 1? It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does the tail of the incoming signal derive the strength to form such a strong back signal? Is the signal fractal, in the sense that within some small part is the description of the whole? These are questions I can't answer, not being a physicist, but still questions that trouble me with the standard explanations given, about it all being smoke and mirrors.

Likewise I find Feynman's suggestion - that what in our world view is spontaneous creation and destruction of positron-electron pairs is in reality electrons changing direction in time as a consequence of absorbing/emitting a photon - both intriguing and rather appealing.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why despite appearances things like light, electrons and signals cannot move backwards in time. My primary interest is in the question of time itself. I'm not well equipped to understand answers to this question, but it seems to me that time is the one question most demanding and deserving of serious thought by physicists even if that thought produces no subsequent answers.

Re Feynman: the notion of going backwards in time is simply a metaphor. It turns out that the manipulations required to make a Dirac Hamiltonian with only positive energy eigenvalues are equivalent to having negative energy solutions travel backwards in time - this is nothing more than turning (-E) into (+E) in the expression exp(iEt). If you go back and review first the old-fashioned perturbation theory, and then its successor, modern covariant field theory, you can see very clearly the origins of Feynman's metaphor. Among other things, you will see how the old-fashioned perturbation theory diagrams combine to produce the usual covariant Feynman diagrams of, say, the Compton effect. You will get a much better idea of what "backwards in time" brings to the table - in my judgment the idea is a creative fiction, but a very powerful one.

QFT is a somewhat difficult subject. To get even a basic understanding you need to deal with both the technical and the conceptual aspects. I highly recommend Chapter I of Vol. I of Weinberg's Quantum Theory of Fields - he gives a good summary of the basic material you need to know to start to understand QFT. Quite frankly, the physics community embraced Feynman's metaphor rather quickly - along with the Schwinger and Tomonaga versions - almost 100%, and it became part of "what everybody knows", as in tacit physics knowledge, as in no big deal. As a metaphor, Feynman's idea is brilliant and powerful; as a statement describing reality it is at least suspect.

Does the usual diagram of an RLC circuit mirror the physical processes of the circuit?

Regards,
Reilly Atkinson
 
  • #82
ueit said:
From a logical point of view it is a solution. No quotes needed.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

This is true, but it says nothing about the possibility that such a theory might exist.

There is no theory called superdeterminism which has anything to do with particle theory. There is an idea behind it, but no true theory called something like "superdeterministic quantum theory" exists. That is why quotes are needed. You cannot negate a theory which assumes that which it seeks to prove.

Note that superdeterminism is a totally ad hoc theory with no testable components. It adds nothing to our knowledge of particle behavior. And worse, if true, it would require that every particle contain a complete history of the entire universe, so as to be capable of matching the proper results for Bell tests - while remaining local.

In addition, there would need to be connections between forces - such as between the weak and the electromagnetic - that are heretofore unknown and not part of the Standard Model. That is because superdeterminism would lead to all kinds of connections and would itself impose constraints.

Just as Everett's MWI required substantial work to be fleshed out into something that could be taken seriously, and Bohm's mechanics is still being worked on, the same would be required of a "superdeterministic" theory before it would really qualify as viable. I have yet to see a single paper published which seriously takes apart the idea of superdeterminism in a critical manner and builds a version which meets scientific rigor.

Here is a simple counter-example: the detectors of Alice and Bob are controlled by an algorithm based on the radioactive decay of separate uranium samples. Thus, randomness introduced by the weak force (perhaps the time of decay) controls the selection of angle settings. According to superdeterminism, those separated radioactive samples actually independently contain the blueprint for the upcoming Bell test and work together (although locally) to ensure that what appears to be a random event is actually connected.

Please, don't make me laugh any harder. :)
 
  • #83
vanesch said:
Logically, it is true that there is no argument showing that superdeterminism cannot hold in a deterministic theory. After all, everything in a deterministic framework is a function of the initial conditions, which can always be picked in exactly such a way as to obtain any correlation you want.

This is not what I have in mind. I don't see EPR explained by the initial conditions, but by a new law of physics that holds regardless of those parameters.

It is on this kind of reasoning that astrology grounds itself.

May be but that is not what I propose.

However, as I pointed out already a few times, it is an empirical observation that things which don't seem to have a direct or indirect causal link happen to be statistically independent.

1. In order to suspect a causal link you need a theory. In Bohm's theory the motion of one particle directly influences the motion of another, no matter how far away. The transactional interpretation has an absorber-emitter information exchange that goes backwards in time. None of these is obvious or intuitive. But we accept them (more or less) because the theory says so. Now, if a theory says that emission and absorption events share a common cause in the past, then they "seem" causally related because the theory says it is so.

2. We are speaking about microscopic events. We have no direct empirical observation of this world, and Heisenberg uncertainty introduces further limitations. So clearly you need a theory first, and only then can you decide what is causally related and what is not.

This is the only way that we can "disentangle" cause-effect relationships (namely, by observing correlations between the "randomly" selected cause, and the observed effect). In other words, "coincidences" obey statistical laws.

So you wouldn't believe that a certain star can produce a supernova explosion until you "randomly" select a star and start throwing matter in it, right?

It is sufficient that one single kind of phenomenon doesn't follow this rule, and as a consequence, not a single cause-effect relationship can be deduced anymore.

I strongly disagree. This is like saying that if a non-local theory is true then we cannot do science anymore because our experiments might be influenced by whatever a dude in another galaxy is doing. All interpretations bring some strange element but this is not necessarily present in an obvious way at macroscopic level.

Simply because this single effect can always be included in the "selection chain" of any cause-effect relationship and hence "spoil" the statistical independence in that relationship.

So, please show me how the assumption that any emitter-absorber pair has a common "ancestor" "spoils" the statistical independence in a medical test. I think you will need to also assume that a patient has all the emitters and the medic all the absorbers (or at least most of them) to deduce such a thing. But maybe you have some other proof in mind.

So *if* superdeterminism is true, then it is simply amazing that we COULD deduce cause-effect relationships at all, in just any domain of scientific activity. Ok, this could be part of the superdeterminism too, but it would be even MORE conspiratorial: superdeterminism that mimics ordinary determinism. Call it "hyperdeterminism" :smile:

I think you are using a double standard here. All interpretations have this kind of conspiracy. We have a non-deterministic theory that mimics determinism, a non-local theory that mimics locality, and a multiverse theory that mimics a single, 4D-universe theory. Also, there is basically no difference between determinism and superdeterminism except the fact that the first one can be proven to be logically inconsistent.

I think that the main error in your reasoning comes from a huge extrapolation from microscopic to classical domain. You may have statistical independence at macroscopic level in a superdeterministic theory just like you can have a local universe based on a non-local fundamental theory.
 
  • #84
ueit said:
1. In order to suspect a causal link you need a theory. In Bohm's theory the motion of one particle directly influences the motion of another no matter how far. The transactional interpretation has an absorber-emitter information exchange that goes backwards in time. None of these is obvious or intuitive. But we accept them (more or less) because the theory say that. Now, if a theory says that emission and absorbtion events share a common cause in the past then they "seem" causally related because the theory says it is so.

Yes, and it is about that class of theories that Bell's inequalities tell us something.

2. We are speaking about microscopic events.

No, we are talking about black boxes with choices by experimenters, binary results, and the correlations between those binary results as a function of the choices of the experimenter. Bell's inequalities are NOT about photons, particles or anything specific. They are about the link that there can exist between *choices of observers* on one hand, and *correlations of binary events* on the other hand.

We have no direct empirical observation of this world, and Heisenberg uncertainty introduces further limitations. So clearly you need a theory first, and only then can you decide what is causally related and what is not.

We consider a *class* of theories: namely those that are local, in which we do not allow superdeterminism of the kind where a distant choice by a distant observer can have a statistical correlation with the choice of a local observer, and with an eventual "central source" (given locality, this could then only happen through "special initial conditions"), and in which there are genuine binary outcomes each time. We now assume that whatever theory describes the functioning of our black-box experiment, it belongs to this class. Well, if that's the case, then there are relations between the correlations one can observe this way. The particular relation that interests us here is the one where it is given that for identical choices (Alice A and Bob A, for instance), the correlation is complete.
It then turns out that one has conditions on the OTHER correlations.

I strongly disagree. This is like saying that if a non-local theory is true then we cannot do science anymore because our experiments might be influenced by whatever a dude in another galaxy is doing.

But this is TRUE! The only way in which Newtonian gravity gets out of this is that its influence diminishes with distance. If gravity didn't fall off as 1/r^2, but went, say, as ln(r), it would be totally impossible to deduce the equivalent of Newton's laws, ever!


I think that the main error in your reasoning comes from a huge extrapolation from microscopic to classical domain. You may have statistical independence at macroscopic level in a superdeterministic theory just like you can have a local universe based on a non-local fundamental theory.

Of course! The only thing Bell is telling us, is that given the quantum-mechanical predictions, it will not be possible to do this with a non-superdeterministic, local, etc... theory. That's ALL.
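To make the "given the quantum-mechanical predictions" part concrete, here is a minimal numeric check (my own sketch; the angle choices are the standard CHSH ones, not taken from this thread): the singlet-state prediction E(a,b) = -cos(a-b) exceeds the bound of 2 that Bell-type reasoning imposes on the class of theories described above.

```python
import math

# Sketch (standard textbook values): the quantum prediction for
# spin-singlet correlations violates the CHSH form of Bell's inequality,
# which bounds local, non-superdeterministic models by 2.

def E(a: float, b: float) -> float:
    """QM correlation of the two +-1 outcomes for analyzer angles a, b (singlet state)."""
    return -math.cos(a - b)

# The usual CHSH angle choices (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the local bound of 2
```

Note also that E(a, a) = -1 at identical settings: this is the "perfect (anti-)correlation" from which the 0-or-1 residual probabilities follow.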
 
  • #85
krimianl99 said:
How so? Just because you don't know what could cause such a correlation doesn't mean the correlation would always be there just because it is there when tested. Maybe an event that causes the particles to become entangled radiates a mind control wave to the experimenters, but only at times when the particles are about to be tested. It's not much more far-fetched than the whole thing to start with.

Well, indeed, we make the assumption that no such thing happens. That's the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.
 
  • #86
DrChinese said:
Note that superdeterminism is a totally ad hoc theory with no testable components. It adds nothing to our knowledge of particle behavior.

Rather like the "theory" of intelligent design in biology.
 
  • #87
vanesch said:
Well, indeed, we make the assumption that no such thing happens. That's the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.

Vanesch, that's nice of you to say. (And I mean that in a good way.)

But really, I don't see that "no superdeterminism" is a true assumption any more than it is to assume that "the hand of God" does not personally intervene to hide the true nature of the universe each and every time we perform an experiment. There must be a zillion similar assumptions that could be pulled out of the woodwork ("the universe is really only 10 minutes old, prior history is an illusion"). They are basically all "ad hoc", and in effect, anti-science.

In no way does the presence or absence of this assumption change anything. Anyone who wants to believe in superdeterminism can, and they will still not have made one iota of change to orthodox quantum theory. The results are still as predicted by QT, and are different from what would have been expected by EPR. So the conclusion that "local realism holds" ends up being a Pyrrhic victory.
 
  • #88
DrChinese said:
Vanesch, that's nice of you to say. (And I mean that in a good way.)
But really, I don't see that "no superdeterminism" is a true assumption any more than it is to assume that "the hand of God" does not personally intervene to hide the true nature of the universe each and every time we perform an experiment. There must be a zillion similar assumptions that could be pulled out of the woodwork ("the universe is really only 10 minutes old, prior history is an illusion"). They are basically all "ad hoc", and in effect, anti-science.

Well, the aim of what I wanted to show in this thread is that there is a logical conclusion that one can draw from a certain number of assumptions (and, of course, the meta-assumption that logic and so on hold). Whether these assumptions are "reasonable", "evident" or whatever doesn't change the fact that they are, or are not, necessary in the logical deduction. And one needs the assumption of no superdeterminism in two instances:
1) When one writes that the residual probability of the couple, say, (red,red) is the product of the probability of red at Alice (taking into account the common information) and the probability of red at Bob: in other words, the statistical independence of these two unrelated events.
2) When one assumes that the distribution of lambda is itself statistically independent of the residual probabilities (we can weight D over this probability) and of the choices of Alice and Bob (so it is not a function of X and Y).

This is like showing Pythagoras' theorem: it is not because you might find the fifth axiom of Euclid "so evident as not to be a true assumption", that you don't need it in the logical proof!
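The two assumptions above can be exercised in a toy model. Below is a sketch (my own construction, purely for illustration) of a local model satisfying both: lambda is drawn uniformly with no reference to the settings (assumption 2), and each side's outcome is a function of lambda and its own local setting only (assumption 1). As the theorem demands, its CHSH combination stays within the bound of 2.

```python
import math

# Toy local hidden-variable model: lambda is a polarization angle drawn
# uniformly, independent of the settings (assumption 2), and each side's
# +-1 outcome depends only on lambda and its own local setting (assumption 1).

N = 100_000  # deterministic grid over lambda, so no sampling noise

def E_local(a: float, b: float) -> float:
    """Average of Alice's outcome times Bob's outcome over lambda."""
    total = 0
    for k in range(N):
        lam = 2 * math.pi * (k + 0.5) / N
        A = 1 if math.cos(lam - a) >= 0 else -1   # Alice's local outcome
        B = -1 if math.cos(lam - b) >= 0 else 1   # Bob's local outcome
        total += A * B
    return total / N

a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E_local(a1, b1) - E_local(a1, b2) + E_local(a2, b1) + E_local(a2, b2)
print(abs(S))  # close to 2: this local model saturates, but never exceeds, the bound
```

Drop either independence assumption (e.g. let the distribution of lambda depend on the settings X, Y) and the derivation of the bound no longer goes through - which is exactly the loophole superdeterminism exploits.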
 
  • #89
vanesch said:
... the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.
Could you put this in observable terms? Something like, the assumption is made (in the formulation of Bell inequalities) that there is no connection between a pair of polarizer settings and the paired detection attributes associated with those polarizer settings?

I feel like I'm getting farther and farther away from clarifying this stuff for myself, and I still have to respond to some replies to my queries by you and Jesse. And thanks, by the way.

Anyway, the experimental results seem to make very clear the (quantitative) relationship between joint polarizer settings and associated joint detection attributes. The theoretical approach and test preparation methods inspired by quantum theory yield very close agreement between QM predictions and results. This quantum theoretical approach assumes a common cause, and involves the instrumental analysis-filtering of (assumed) like physical entities by like instrumental analyzers-filters, and timer-controlled pairing techniques.

We know that there is a predictable quantitative relationship between joint polarizer settings and pairs of appropriately associated joint detection attributes. And so, it's assumed that there is a qualitative relationship also. This is the basis for the assumption of common cause and common filtration of common properties.

The experimental violation of Bell inequalities has shown that the assumptions of common cause and common filtration of common properties can't be true if one uses a certain sort of predictive formulation in which one also assumes that events at A and B (for any given set of paired detection attributes) are independent of each other, so that the probability of coincidental detections is the product of the separate probabilities at A and B. Of course, the experimental design(s) necessary to produce entanglement preclude such independence. And the quantum mechanical predictive formulation, in association with the first two (common cause) assumptions, considered in light of the experimental results, supports the common cause assumption(s) - and therefore similar or identical disturbances moving from emitter to filter during any given coincidence interval.
 
  • #90
Originally Posted by ThomasT
I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference.

vanesch said:
Hum, no offense, but I think you totally misunderstood the EPR-Bell type experiments.
No offense taken. I realize that I can be a bit, er, dense at times. I'm here to learn, to pass on anything that I have learned and think is ok, and especially to put out here for criticism any insights that I think I might have. I very much appreciate you mentors and advisors, etc., taking the time to explain things.
vanesch said:
There's a common source, two opposite arms (or optical fibers or anything) and two experimental setups: Alice's and Bob's.
Each experimental setup can be seen to consist of a polarizing beam splitter which splits the incoming light into an "up" part and a "down" part, and to each of these two channels, there's a photomultiplier. The angle of the polarizing beam splitter can be rotated.

Now, if a photon/lightpulse/... comes in which is "up" wrt the orientation of the beamsplitter, then that photon is (ideally) going to make the "up" photomultiplier click, and not the down one. If the photon is in the "down" direction, then it is going to make the down photomultiplier click and not the up one. If the photon is polarized at 45 degrees, then it will randomly either make the up one click and not the down one, or make the down one click and not the up one.

So, if a photon is detected, in any case one of the two photomultipliers will click at Alice. Never both. That's verified. But sometimes neither clicks, because of finite efficiency.

At Bob, we have the same.

Now, we look only at those pulses which are detected both at Alice and Bob: if at Alice something clicks, but not at Bob, we reject it, and also vice versa. This is an item which receives some criticism, but it is due to the finite efficiency of the photomultipliers.

However, what one notices is that if both Alice's and Bob's analyzers are parallel, then EACH TIME there is a click at Alice and at Bob, it is the SAME photomultiplier that clicks on both sides. That is, each time Alice's "up" photomultiplier clicks, it is also Bob's "up" multiplier that clicks, NEVER the "down" one. And each time it is Alice's "down" photomultiplier that clicks, it is also Bob's "down" multiplier that clicks, never his "up" one.

THIS is what one means with "perfect correlation".
Ok. I understand what you're talking about wrt perfect correlation now. This is only applicable when the analyzers are aligned. And, in this case we're correlating 'up' clicks at A with 'up' clicks at B and 'down' clicks at A with 'down' clicks at B.

vanesch said:
This can easily be explained if both photons/lightpulses are always either perfectly aligned with Bob's and Alice's analyzers, or "anti-aligned". But any couple of photons that would be "in between", say at 45 degrees, will be hard to explain in a classical way: if each of them has a 50-50% chance to go up or down, at Alice or Bob, why do they do *the same thing* at both sides?
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get. Quantum mechanics gets around the problem of the disturbance-filter angular relationship by not saying anything about specific emission angles. It just says that if the optical disturbances are emitted in opposite directions, then the analyzers will be dealing with essentially the same thing(s). And classical optics tells us that if the light between analyzer A and analyzer B is of the same sort at both ends, then the results at detector A and detector B will be the same for any given set of paired detection attributes: if there's a detection at A then there will be a detection at B, and if there's no detection at A, then there will be no detection at B.

vanesch said:
Moreover, this happens (with the same source) for all angles, as long as Alice's and Bob's are parallel. So if the source were emitting only perfectly aligned and anti-aligned photon pairs with Alice's and Bob's angles both at, say, 20 degrees (so they are parallel), then it is hard to explain with classical optics that when both Alice and Bob turn their angles to, say, 45 degrees, they STILL find perfect correlation, no?
We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.

I think that one can get an intuitive feel for why the quantum mechanical predictions work by viewing them from the perspective of the applicable classical optics laws and experiments. Don't you think so?
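One way to test that intuition numerically: below is a sketch (my own toy model, not ThomasT's) in which both photons carry the same polarization angle and each independently passes its polarizer with the Malus-law probability cos^2. At parallel settings this model produces same-channel clicks only about 75% of the time, while QM (and the experiments) give 100% - which is exactly the difficulty vanesch raises about pairs polarized "in between" the analyzer angles.

```python
import math

# Toy "classical Malus" model: both photons share one polarization angle
# lam; each independently ends up in the "up" channel with probability
# cos^2(lam - setting), per Malus's law.

N = 100_000
same = 0.0
for k in range(N):
    lam = math.pi * (k + 0.5) / N      # polarization uniformly distributed
    p = math.cos(lam) ** 2             # both analyzers set to 0 (parallel)
    same += p * p + (1 - p) * (1 - p)  # both "up", or both "down"

print(same / N)  # ~0.75: no perfect correlation, unlike QM's 1.0
```

Analytically the average of cos^4 + sin^4 over a uniform angle is 3/8 + 3/8 = 3/4, so the shortfall is not a numerical artifact: independent Malus-law filtering of a shared polarization cannot by itself reproduce the perfect correlation at parallel settings.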
 
