Entanglement and observations

In summary, the conversation discusses the idea that observing a particle in a superposed state entangles the observer with that particle. This idea is related to entanglement and the many-worlds interpretation of quantum mechanics. The act of observation is seen as a process that destroys the entanglement, but only for the observer. There is also mention of the possibility that everything in the world is entangled, which raises questions about the laws of thermodynamics.
  • #36
mn4j said:
Thanks for mentioning this. I wasn't aware of this paper as they only cite the original paper in a footnote (difficult to track). However, this paper is hardly a rebuttal. I'll characterise it as a "musing". After reading this paper I believe the author has misunderstood the work of Hess and Philipp, probably based on a basic misunderstanding of probability theory. The author claims he is working on a detailed rebuttal of Hess and Philipp. I'm waiting to read that.

There have however been other disproofs of Bell's theorem independent of Hess and Philipp:
* Disproof of Bell's Theorem by Clifford Algebra Valued Local Variables. http://arxiv.org/pdf/quant-ph/0703179.pdf
* Disproof of Bell's Theorem: Reply to Critics. http://arxiv.org/pdf/quant-ph/0703244.pdf
* Disproof of Bell's Theorem: Further Consolidations. http://arxiv.org/pdf/0707.1333.pdf

These are sophisticated ways of saying that they didn't understand what Bell was claiming. The simplest form of Bell's theorem can be obtained by selecting just 3 angular directions. No sophisticated modelling is needed, no Clifford or other algebras, simply the following:

Consider 3 angular settings, A, B and C.

Now, give me, as a model, the "hidden variable" probabilities for the 8 possible cases:

hidden state "1" : A = down, B = down, C = down ; probability of hidden state 1 = p1
hidden state "2": A = down, B = down, C = up ; probability of hidden state 2 = p2
hidden state "3": A = down, B = up, C = down ; probability of hidden state 3 = p3
...
hidden state "8" : A = up , B = up, C = up ; probability of hidden state 8 = p8.

In the above, "A = down" means: in the hidden state that has A = "down", we will measure, with certainty, a "down" result if observer 1 applies this hidden = state to a measurement in the direction "A".

Because there is perfect anti-correlation when observer 1 and observer 2 measure along the same direction, we can infer that A = down means that when this state is presented to observer 2, he will find with certainty the "up" result if he measures along the axis A.

You don't need to give me any "mechanical model" that produces p1, ... p8. Just the 8 numbers, such that 1 > p1 > 0 ; 1 > p2 > 0 ... ; 1 > p8 > 0, and p1 + p2 + ... + p8 = 1 ; in other words, {p1,... p8} form a probability distribution over the universe of the 8 possible hidden states which interest us.

If we apply the above hidden variable distribution to find the correlation between the measurement by observer 1 along A, and by observer 2 along B, we find:

for hidden state 1: correlation = -1 (obs. 1 finds down, obs. 2 finds up)
for hidden state 2: correlation = -1
for hidden state 3: correlation = 1 (both find down)
for hidden state 4: correlation = 1 (both find down)
for hidden state 5: correlation = 1 (both find up)
for hidden state 6: correlation = 1 (both find up)
for hidden state 7: correlation = -1
for hidden state 8: correlation = -1.

So we find that the correlation is given by
C(A,B) = p3 + p4 + p5 + p6 - p7 - p8 -p1 - p2

We can work out, that way:
C(A,C)

C(B,A)

C(B,C)

C(C,A)

and

C(C,B).

They are sums and differences of the numbers p1 ... p8.

Well, the point of Bell's theorem is that you cannot find 8 such numbers p1, p2, ...
which give the same results for C(X,Y) as do the quantum predictions for C(X,Y) when the directions are 0 degrees, 45 degrees and 90 degrees (for a spin-1/2 system).

So you can find the most sophisticated model you want. In the end, you have to come up with 8 numbers p1, ... p8, which are probabilities. And then you cannot obtain the quantum correlations. The model doesn't matter. You don't even need a model. You only need 8 numbers. And you can't give them to me, because they don't exist.

If you think you have a model that shows that Bell was wrong, give me the 8 probabilities p1, ... p8 you get out of it and show me how they give rise to the quantum correlations.
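For concreteness, here is a minimal sketch (plain Python, nothing beyond the standard library) of the bookkeeping described above: it takes any candidate list of eight probabilities, ordered exactly as the hidden states listed above, and returns the correlation between observer 1 measuring one direction and observer 2 measuring another, with observer 2 always finding the opposite of the listed value.

[code]
import itertools

# hidden states (A, B, C), with -1 = down, +1 = up; C varies fastest,
# so the ordering matches states 1..8 in the list above
states = list(itertools.product([-1, +1], repeat=3))

def correlation(p, i, j):
    """Correlation of obs. 1 measuring direction i with obs. 2 measuring j.

    Observer 2 finds the opposite of the listed value (perfect anti-correlation
    along equal directions), hence the minus sign."""
    return sum(pk * s[i] * (-s[j]) for pk, s in zip(p, states))

# example: the uniform distribution p1 = ... = p8 = 1/8 gives zero correlations
p = [1.0 / 8] * 8
print(correlation(p, 0, 1), correlation(p, 0, 2), correlation(p, 1, 2))  # C(A,B), C(A,C), C(B,C)
[/code]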
 
  • #37
mn4j said:
Then the textbooks are wrong. Let's take the intensity down to the point where you get a single photon on the screen. Do you get a diffraction pattern? Of course NOT! You get a single speck. Experiment in fact proves that the diffraction pattern is built up slowly from individual specks and you need a significant number of photons/electrons to start noticing the appearance of a pattern.

I think you misunderstood Micha. He pointed out that if it takes about half an hour on average between impacts, it is difficult to claim that your Scenario 2 oscillation is still going on. If you do this now for 10 years, you will nevertheless build up the same diffraction pattern as if you did this with an intense flash of laser light in 10 nanoseconds.

So the diffraction pattern must be explained "photon per photon" and cannot be based upon the "previous photon".
 
  • #38
vanesch said:
Now, give me, as a model, the "hidden variable" probabilities for the 8 possible cases:

hidden state "1" : A = down, B = down, C = down ; probability of hidden state 1 = p1
hidden state "2": A = down, B = down, C = up ; probability of hidden state 2 = p2
hidden state "3": A = down, B = up, C = down ; probability of hidden state 3 = p3
...
hidden state "8" : A = up , B = up, C = up ; probability of hidden state 8 = p8.

I work this out further. We will first make the extra hypothesis that for a single measurement direction, the probability for "up" is the same as for "down". This goes for direction A, B and C.

From this follow several equations for the p-values:

p1 + p2 + p3 + p4 = 1/2 ; p5 + p6 + p7 + p8 = 1/2 (A-up and A-down both 50%)

p1 + p2 + p5 + p6 = 1/2 ; p3 + p4 + p7 + p8 = 1/2 (B-up and B-down both 50%)

p1 + p3 + p5 + p7 = 1/2 ; p2 + p4 + p6 + p8 = 1/2 (C-up and C-down both 50%)

We can, from this, deduce that the set of 8 p-values can be reduced to 4 independent degrees of freedom, which we choose to be p1, p2, p3 and p7.

Some algebra leads then to:
p4 = 1/2 - p1 - p2 - p3
p5 = 1/2 - p1 - p3 - p7
p6 = -p2 + p3 + p7
p8 = p1 + p2 - p7.

If we work out the correlations in our independent variables, we find:

C(A,B) = 1 - 4p1 - 4p2
C(B,C) = 4p3 +4p7 - 1
C(A,C) = 1 - 4p1 - 4p3

Now, let's turn to quantum mechanics. For a perfect Bell state |up> |down> - |down> |up> (spin-1/2 systems), we can easily deduce that for two analyzer angles th1 and th2, the correlation is given by:

Cqm(th1,th2) = 2 sin^2((th1-th2)/2) - 1

This is easily verified for perfectly parallel analyzers (th1 = th2, so th1 - th2 = 0): Cqm(th1,th1) = -1, perfect anticorrelation, and for perfectly anti-parallel analyzers (th1 - th2 = 180 degrees): Cqm = +1, perfect correlation (if one has spin up, the other has spin up for sure too).

For 90 degrees, we find 0 correlation: th1 - th2 = 90 degrees -> Cqm = 0.


If we now take the 3 angles A = 0 degrees, B = 45 degrees, C = 90 degrees as our 3 angular directions, then we find the following quantum predictions:
Cqm(A,B) = Cqm(B,C) = -1/sqrt(2)
Cqm(A,C) = 0

Now, let us see if we can find numbers p1, p2, p3 and p7 that can satisfy these 3 expressions:
-1/sqrt(2) = 1 - 4p1 - 4p2
-1/sqrt(2) = 4p3 +4p7 - 1
0 = 1 - 4p1 - 4p3

We have 4 degrees of freedom, and 3 equations, so we can write now p2, p3 and p7 as a function of p1:

p2 = 1/8 (2 + sqrt(2) - 8p1)
p3 = 1/4 (1 - 4 p1)
p7 = 1/8 (8 p1 - sqrt(2))

from which it follows that:

p4 = p1 - 1/(4 sqrt(2))
p5 = 1/8 (2 + sqrt(2) - 8 p1)
p6 = p1 - 1/(2 sqrt(2))
p8 = 1/4(1+sqrt(2) - 4 p1)

From p3 follows that p1 < 1/4.

From p6 follows that p1 > 1/(2 sqrt(2))

But 1/(2 sqrt(2)) > 1/4, so we can never find a p1 that satisfies both inequalities, i.e. that makes both p3 and p6 positive.

QED.
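The same dead end can be reached by brute force: hand the constraints to a linear-programming solver and ask whether any eight non-negative probabilities summing to 1 reproduce the three quantum correlations. A sketch, assuming numpy and scipy are available:

[code]
import numpy as np
from scipy.optimize import linprog

# the 8 hidden states (A, B, C), -1 = down, +1 = up, ordered as above
s = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])

def corr_row(i, j):
    # coefficient of p_k in C(i,j): obs. 1 reads s_k[i], obs. 2 reads -s_k[j]
    return -s[:, i] * s[:, j]

A_eq = np.vstack([np.ones(8),       # probabilities sum to 1
                  corr_row(0, 1),   # C(A,B)
                  corr_row(1, 2),   # C(B,C)
                  corr_row(0, 2)])  # C(A,C)
b_eq = np.array([1.0, -1 / np.sqrt(2), -1 / np.sqrt(2), 0.0])  # quantum values

res = linprog(c=np.zeros(8), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print(res.status, res.message)      # expect status 2: the problem is infeasible
[/code]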
 
  • #39
vanesch said:
Well, the point of Bell's theorem is that you cannot find 8 such numbers p1, p2, ...
which give the same results for C(X,Y) as do the quantum predictions for C(X,Y) when the directions are 0 degrees, 45 degrees and 90 degrees (for a spin-1/2 system).

So you can find the most sophisticated model you want. In the end, you have to come up with 8 numbers p1, ... p8, which are probabilities. And then you cannot obtain the quantum correlations. The model doesn't matter. You don't even need a model. You only need 8 numbers. And you can't give them to me, because they don't exist.

If you think you have a model that shows that Bell was wrong, give me the 8 probabilities p1, ... p8 you get out of it and show me how they give rise to the quantum correlations.

You completely missed the point of the critiques of Bell's theorem. You have a "calculator" called Bell's theorem which critics claim calculates wrongly, yet you ask that I prove that the "calculator" is wrong by providing numbers that will always give the "right" answer using the same "wrong calculator"?!
 
  • #40
vanesch said:
I work this out further. We will first make the extra hypothesis that for a single measurement direction, the probability for "up" is the same as for "down". This goes for direction A, B and C.

From this follow several equations for the p-values:

p1 + p2 + p3 + p4 = 1/2 ; p5 + p6 + p7 + p8 = 1/2 (A-up and A-down both 50%)

p1 + p2 + p5 + p6 = 1/2 ; p3 + p4 + p7 + p8 = 1/2 (B-up and B-down both 50%)

p1 + p3 + p5 + p7 = 1/2 ; p2 + p4 + p6 + p8 = 1/2 (C-up and C-down both 50%)

We can, from this, deduce that the set of 8 p-values can be reduced to 4 independent degrees of freedom, which we choose to be p1, p2, p3 and p7.

Some algebra leads then to:
p4 = 1/2 - p1 - p2 - p3
p5 = 1/2 - p1 - p3 - p7
p6 = -p2 + p3 + p7
p8 = p1 + p2 - p7.

If we work out the correlations in our independent variables, we find:

C(A,B) = 1 - 4p1 - 4p2
C(B,C) = 4p3 +4p7 - 1
C(A,C) = 1 - 4p1 - 4p3

Now, let's turn to quantum mechanics. For a perfect Bell state |up> |down> - |down> |up> (spin-1/2 systems), we can easily deduce that for two analyzer angles th1 and th2, the correlation is given by:

Cqm(th1,th2) = 2 sin^2((th1-th2)/2) - 1

This is easily verified for perfectly parallel analyzers (th1 = th2, so th1 - th2 = 0): Cqm(th1,th1) = -1, perfect anticorrelation, and for perfectly anti-parallel analyzers (th1 - th2 = 180 degrees): Cqm = +1, perfect correlation (if one has spin up, the other has spin up for sure too).

For 90 degrees, we find 0 correlation: th1 - th2 = 90 degrees -> Cqm = 0.


If we now take the 3 angles A = 0 degrees, B = 45 degrees, C = 90 degrees as our 3 angular directions, then we find the following quantum predictions:
Cqm(A,B) = Cqm(B,C) = -1/sqrt(2)
Cqm(A,C) = 0

Now, let us see if we can find numbers p1, p2, p3 and p7 that can satisfy these 3 expressions:
-1/sqrt(2) = 1 - 4p1 - 4p2
-1/sqrt(2) = 4p3 +4p7 - 1
0 = 1 - 4p1 - 4p3

We have 4 degrees of freedom, and 3 equations, so we can write now p2, p3 and p7 as a function of p1:

p2 = 1/8 (2 + sqrt(2) - 8p1)
p3 = 1/4 (1 - 4 p1)
p7 = 1/8 (8 p1 - sqrt(2))

from which it follows that:

p4 = p1 - 1/(4 sqrt(2))
p5 = 1/8 (2 + sqrt(2) - 8 p1)
p6 = p1 - 1/(2 sqrt(2))
p8 = 1/4(1+sqrt(2) - 4 p1)

From p3 follows that p1 < 1/4.

From p6 follows that p1 > 1/(2 sqrt(2))

But 1/(2 sqrt(2)) > 1/4, so we can never find a p1 that satisfies both inequalities, i.e. that makes both p3 and p6 positive.

QED.

Did you take a look at this article:
http://www.pnas.org/cgi/content/full/101/7/1799

Your proof still suffers from the problems of simultaneous measurability of incompatible experiments.
 
  • #41
mn4j said:
You completely missed the point of the critiques of Bell's theorem. You have a "calculator" called Bell's theorem which critics claim calculates wrongly, yet you ask that I prove that the "calculator" is wrong by providing numbers that will always give the "right" answer using the same "wrong calculator"?!

You were introducing a sophisticated model that was going to violate the proof of Bell's theorem in a more general context. In this simple context, the "model" reduces to 8 numbers, and these 8 numbers give the simplest form of Bell's theorem - of course in much less generality. If you claim to have a violation of ALL of Bell's theorem, then you must also find a way around this simple case.

Now, you shift the argument, not to a technicality in the general proof of Bell's theorem, but to the critique that one uses values for simultaneously unmeasurable quantities. Now, that critique is of course rather ill-posed, because that's the very content of Bell's theorem: that one cannot give a priori values to simultaneously unmeasurable quantities! It's its very message.

If one limits oneself to simultaneously measurable quantities, then OF COURSE we get out a standard probability distribution. That's no surprise. No theorem is going to go against this. But you seem to have missed entirely the scope (and limitations) of what Bell's theorem tells us.

It tells us that it is not possible to have any pre-determined values for the outcomes of all possible (a priori yet not decided) measurements that will be done on the system that will generate the outcomes of quantum predictions. We have of course to pick one to actually measure, and then of course the other outcomes will not be compatible. But as we could have taken any of the 3 possible measurements, the outcomes have to be pre-specified if they are going to be responsible for the correlations. That's the idea. So telling me that I use probabilities for incompatible measurements is no surprise, it is the essence of the setup. The idea is that hidden variables pre-determine ALL potential results, but of course I can only pick one of them. Now, it doesn't matter what mechanism is used internally to keep this information, in no matter what algebraic structure, and no matter what mechanism is responsible for generating the outcome when the measurement device has been set up in a certain direction. The only thing that matters is that this outcome is fixed and well-determined for all potential outcomes, and it has to, because we are free to make the choice.

This comes from the basic assumption that correlations can only occur if there is a common cause. This is the basic tenet which is taken in the Bell argument: correlations between measurements MUST HAVE a common cause. If there is no common cause, then measurements are statistically independent. This doesn't need to be so, of course. Correlations "could happen". But we're not used to that.

If you flip a switch, and each time you flip it, the light in the room goes on or goes out, you might be tempted to think that there is some causal mechanism between the switch and the light. You would find it strange to find a switch you can flip, with the light that goes on and off with it, and nevertheless no causal link somehow between the two events.
So we came to think of any statistical correlation as being the result of a causal link (directly, in that one event influences the other, or indirectly, in that there is a common cause). For instance, when looking at the color of my left sock, it is usually strongly correlated with the color of my right sock. That doesn't mean that my left sock's color is causally determining the color of my right sock, it simply means that there was a common cause: this morning I took a pair of socks with identical color.

Take two dice. You can throw them "independently". Somehow we assume that the outcomes will be statistically independent. But it is of course entirely possible to find a Kolmogorov distribution of dice throws that makes them perfectly correlated: each time die 1 gives result A, die 2 gives result 7 - A. If I send die 1 to Japan, and die 2 to South Africa, and people throw the dice, and later they come back and compare the lists of their throws, then they would maybe be highly surprised to find that they are perfectly anti-correlated. Nevertheless, there's no problem in setting up a statistical description of both dice that does this. So one wonders: is there some magical causal link between them, so that when you throw die 1, you find an outcome, and that, through hyperwobble waves, influences the mechanics of die 2 when you throw it ?
Or is there an a priori programmed list of outcomes in both dice, which makes them just give out these numbers? This last possibility is what Bell investigates.
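A toy sketch of that last possibility, with two pre-programmed lists written at a common source and nothing exchanged at throw time:

[code]
import random

rng = random.Random(0)
list_japan = [rng.randint(1, 6) for _ in range(20)]   # programmed at the common source
list_safrica = [7 - x for x in list_japan]            # programmed at the common source

# each "throw" just reads off the next entry of the local list
throws_japan, throws_safrica = list_japan, list_safrica
print(all(a + b == 7 for a, b in zip(throws_japan, throws_safrica)))  # True: perfect anti-correlation
[/code]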

So Bell's theorem inquires to what extent there can be a common cause to the correlations of the quantum measurements on a Bell pair, starting from the assumption that every correlation must have a causal origin. As a direct causal link is excluded (spacelike separation between the measurement events), only an indirect causal link (common cause) can be the case. So Bell inquires to what extent, IF ALL OUTCOMES ARE PRE-DETERMINED (even though they cannot be simultaneously observed), the quantum correlations can follow from a common cause, assuming that the choice of the settings of the analysers is "free" (is not part of the same causal system).

That's all. As such, the critique that Bell's theorem assigns probabilities to simultaneously unobservable outcomes is ill-posed, because it is exactly the point it is going to analyse: is it POSSIBLE to pre-assign values to each of the (incompatible) measurement outcomes and reproduce the quantum correlations ? Answer: no, this is not possible. That's what Bell's theorem says. No more, no less.

But it is impressive enough. It means that under the assumptions that correlations must always occur by causal (direct or indirect) link and the assumption of "free" choice of the settings of the analyser, the quantum correlations cannot be produced by pre-assigning values to all possible outcomes. It would have been the most straightforward, classically-looking hidden variable implementation that one could obtain to mimic quantum theory, and it is not going to work.

But of course, you can reject one of the several hypotheses in Bell. You can reject the fact that without a direct or indirect causal link, correlations cannot happen. Indeed, "events can happen". You can think of the universe as a big bag of events, which can be correlated in just any way, without necessarily any causal link.
Or you can take the hypothesis that the settings of the analysers are not actually a free choice, but are determined by the source of the particles just as well as the outcomes are. That's called "superdeterminism" and points out that in a fully deterministic universe, it is not possible to make "statistically independent free choices" in the settings of instruments, as these will be determined by earlier conditions which can very well be correlated with the source.

Nevertheless, in both cases, we're left with a strange universe, because it wouldn't allow us in principle to make any inference from any correlation - which is nevertheless the basis of all scientific work. Think of double-blind medical tests. If statistical correlations are found between the patients who took the new drug, and their health improvement, then we take it that there must be a causal link somehow. It is not because we can find an entirely satisfying Kolmogorov distribution, and that it "just happened" that the people that took the new drug were correlated with a less severe illness, that we say that there is no causal effect. If we cannot conclude that a correlation is directly or indirectly causally linked, we are in a bad shape to do science. But that's nevertheless what we have to assume to reject Bell's conclusions. If we assume that correlation means: causal influence, then Bell's assumptions are satisfied. And then he shows us that we cannot find any such causal mechanism that can explain the quantum correlations.

Another way to get around Bell is to assume that there CAN be a direct causal link of the measurement at observer 1 to the measurement of observer 2. That's what Bohmian mechanics assumes.

Finally, a way to get around Bell is to assume that BOTH outcomes actually happened, and that the correlations only "happen" when we bring together the two results. That's the MWI view on things.

But Bell limits himself to showing that there cannot be a list of pre-established outcomes for all potential measurements which generates the quantum correlations. As such, it is perfectly normal that, in the proof, one makes a list of pre-established outcomes and assigns a probability to them.
 
  • #42
mn4j said:
Did you take a look at this article:
http://www.pnas.org/cgi/content/full/101/7/1799

Your proof still suffers from the problems of simultaneous measurability of incompatible experiments.

Mmmm, I read a bit of this article - not everything, I admit. But what seems rather strange, in the left column on p 1801, is that the outcomes are apparently allowed to depend on pre-correlated lists that are present in both "computers", together with the choices. But if you do that, you do not even need any source anymore: they can produce, starting from that list, any correlation you want! In other words, if the outcome at a certain moment is both a function of the time of measurement, and a pre-established list of common data, and the settings, then I could program both lists in such a way as to reproduce, for instance, EPR correlations I had previously calculated on a single Monte Carlo simulator. I do not even need a particle source anymore, the instruments can spit out their results without any particle impact. The common cause of the correlation is now to be found in the common list they share.

What this in fact proposes is what's called superdeterminism. It is a known "loophole" in Bell's theorem: if both measurement systems have a pre-established correlation that will influence the outcomes in a specific way, it is entirely possible to reproduce any correlation you want. But it then kills all kinds of scientific inquiry, because any observed correlation at any moment can always have been pre-established by a "common list" in the different measurement systems.
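A toy sketch of that objection (not the protocol of the paper itself): a single simulator precomputes both the settings and the outcomes so that the replayed records carry the quantum correlation E(a,b) = -cos(a-b). No particle source appears anywhere, and the settings are part of the shared list - which is exactly the superdeterministic ingredient.

[code]
import math, random

rng = random.Random(1)
angles = [0.0, math.pi / 4, math.pi / 2]

# precompute the "common list": settings and outcomes for every future trial
trials = []
for _ in range(200000):
    a, b = rng.choice(angles), rng.choice(angles)      # settings are on the list too
    x = rng.choice([-1, 1])                            # outcome spit out by instrument 1
    p_same = 0.5 * (1 - math.cos(a - b))               # chosen so that E[x*y] = -cos(a-b)
    y = x if rng.random() < p_same else -x             # outcome spit out by instrument 2
    trials.append((a, b, x, y))

def E(a, b):
    prods = [x * y for (aa, bb, x, y) in trials if (aa, bb) == (a, b)]
    return sum(prods) / len(prods)

print(E(0.0, math.pi / 4), E(0.0, math.pi / 2))        # approx -0.707 and 0.0
[/code]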
 
  • #43
vanesch said:
Nevertheless, in both cases, we're left with a strange universe, because it wouldn't allow us in principle to make any inference from any correlation - which is nevertheless the basis of all scientific work. Think of double-blind medical tests. If statistical correlations are found between the patients who took the new drug, and their health improvement, then we take it that there must be a causal link somehow. It is not because we can find an entirely satisfying Kolmogorov distribution, and that it "just happened" that the people that took the new drug were correlated with a less severe illness, that we say that there is no causal effect. If we cannot conclude that a correlation is directly or indirectly causally linked, we are in a bad shape to do science.

I think you are wrong here. Superdeterminism doesn’t mean that there are no causal links, on the contrary. Nothing “just happens” in a superdeterministic universe, everything has a cause. The only difference between superdeterminism and classical determinism is the denial of “freedom”, that is, the denial of non-causal events or human decisions.

Let’s discuss your above example with the double-blind medical test. If superdeterminism is true then it means that it is wrong to assume that every patient has the same chance to take the medicine. Some are predetermined to take it, others not to take it. Nevertheless, we still can conclude that those taking the medicine feel better because of the medicine, and the others feel no improvement because of lack of treatment.

In case of an EPR experiment, superdeterminism denies that the spin of the particles produced at the source and the measurement settings are statistically independent variables. But this doesn’t mean that the measurement results are not causally related to the preexisting particle spin and measurement axis. What it means is that for a certain measurement setting the universe allows only particles with a certain spin to be generated. The reason behind this might be a conservation principle of some sort, whatever. In no case is one forced to conclude that the correlations “just happen” and abandon science.

In classical determinism, the experimental setup (usually a result of a human decision) is postulated as the primary cause of future events. Superdeterminism does not accept this (obviously false) limitation. Otherwise, no intermediary cause is denied by superdeterminism. Everything that is true in classical determinism is also true in superdeterminism.
 
  • #44
ueit said:
Let’s discuss your above example with the double-blind medical test. If superdeterminism is true then it means that it is wrong to assume that every patient has the same chance to take the medicine. Some are predetermined to take it, others not to take it. Nevertheless, we still can conclude that those taking the medicine feel better because of the medicine, and the others feel no improvement because of lack of treatment.

You didn't get what I was trying to say. There can of course be a causal effect in superdeterminism. The point is that you cannot infer it anymore. The way one infers causal effects is by observation of correlations, when the "cause" is "independently and randomly" selected. If I "randomly" push a button, and I find a correlation between "pushed button" and "light goes on", then I can normally conclude that the pushed button is the cause of the light going on.

Same with double blind tests. If I "randomly and freely" choose which patients get the drug, and I see a correlation between "taken the drug" and "got better", I can normally infer that "getting better" is the result of the cause "took the drug".

But in superdeterminism, one cannot say anymore that "I freely pushed the button". It could be that I "just happened to push the button" each time the light went on, by previous common cause. So I cannot conclude anymore that there is a causal effect "pushing the button" -> "light goes on". And as such, I cannot deduce anything anymore about closed electrical circuits or anything. There is a causal link, but it could lie in the past, and it is what made at the same time me push the button, and put on the light.

In the same way, I can only conclude from my double blind medical test that there was a common cause that made me "select randomly patient 25 to get the drug" and that made patient 25 get better. It doesn't need to mean that it was the drug that made patient 25 get better. It was somehow a common cause in the past that was both responsible for me picking out patient 25 and for patient 25 to get better.
 
  • #45
mn4j said:
Clearly you need a paradigm shift to be able to see what I'm talking about. And when you do see it, you will understand why most of the things you are posting here make no sense at all. Let me try to illustrate to you the difference between ontological statements and epistemological statements.

Imagine I call you up on the phone and tell you I have a coin in my hand and I'm going to toss it. Then I toss it. You actually hear as the coin drops and settles to a stop. Then I ask you: what is the outcome, heads or tails? What will you say? The correct answer is to say you don't know, which is exactly the same thing as, though less precise than, saying that there is a 0.5 probability that the outcome is heads and a 0.5 probability that the outcome is tails.

If you say it is "both heads and tails", or "neither heads no tails", I would think you are just being stupid because I look down and see clearly the state of the coin. This clearly tells you that there is a difference between epistemological statements and ontological ones. For the person who has observed the outcome, their observation is an ontological statement. For the person who is yet to observe the outcome, their statement is epistemological.

- epistemological: the probability of the outcome being a "head" is 0.5
- ontological: the outcome IS a "head", or the probability of the outcome being a "head" is 1.0

As you see, the two statements appear to contradict each other but they are both correct in their contexts. It would be wrong for a person who has not observed the outcome and thus is making an epistemological statement, to suggest without any extra information that the probability for "head" is 1.0, even though ontologically that is the correct answer. Therefore, it is nonsensical to interpret a statement that was made epistemologically in an ontological manner.

Every time somebody says "the coin IS in a superposition of heads and tails", that is what they are doing. It makes absolutely no difference whether you are talking about macroscopic objects, or photons and electrons. Every time a person uses wavefunction collapse as a real physical process happening at observation, that is, with the unstated assumption that something is actually happening to the coin or photon when it is observed, they commit the same error. Every time somebody says there are two universes, such that in one the coin is heads and in the other it is tails, they commit this error.

This is so fundamental, I dare say your future as a scientist (as opposed to a phenomenologist) hangs on you understanding this difference.
I've been busy and haven't been keeping up on this post, but I have to respond to this. You are completely wrong here. The classical example of a coin flip you can't see is fundamentally different from the quantum example of superposition. Saying a particle goes through both slits IS an ontological statement, since it clearly does go through both slits (proven by the interference pattern).

Like I said before, what you're promoting is a hidden-variables theory. These make no sense, however. If a particle doesn't go through both slits at the same time, what explains why the particle NEVER hits certain areas of the target? The only logical explanation for this experimental result is that the particle goes through both slits at the same time. The particle doesn't have a 50% chance of going through one slit and a 50% chance of going through the other. It goes through both slits and is in a superposition between going through one slit and going through the other.

I may not know a lot about entanglement (and saying entanglement = classical entanglement makes some sense to me), but I do know that superposition is not equal to hidden properties.
 
  • #46
mn4j said:
Probability not statistics.

OK, seriously, you're starting to piss me off. You are completely wrong here. Quantum computers DEPEND on quantum effects (superposition and entanglement). If the theory you keep pushing were true, quantum computers would not work. However, they do.

I'm hoping you will actually read this thought experiment. The thought experiment takes place in a World A, where quantum mechanics is true and superposition and entanglement are not classical at all, and a World B, where your theory is true. Take a particle in World A, in a superposition of two states. Now take a particle in World B that has a 50% chance of being in each of two states. In order to make these particles equivalent, you would just need to decohere the particle in World A. This would involve sending a photon at it and not measuring the result. Doing this in World A would decohere the particle. Doing this in World B, however, would leave the particle the same as it was before.

Now look at quantum computers. They are so sensitive to decoherence that if you were to use a non-reversible logic gate in one, no quantum algorithms would work. This is because non-reversible logic gates give off information in the form of heat, effectively decohering the qubits passed through them. Now think about this. In World A, the quantum computer only works with reversible gates. In World B, the quantum computer SHOULD work with any type of gate (since decoherence doesn't change the state). Our world is World A, however, since quantum computers in our world are sensitive to decoherence. Therefore your theory is wrong.
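Here is a small numerical sketch of the comparison (the extra, unmeasured qubit coupled by a CNOT is just a stand-in for "sending a photon at it and not measuring the result"; the states and the gate are arbitrary choices for illustration):

[code]
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
rho_A = np.outer(plus, plus)          # World A: the pure superposition |+><+|
rho_B = np.eye(2) / 2                 # World B: a classical 50/50 mixture

# couple World A's qubit to an environment qubit in |0> via a CNOT (reversible),
# then trace the environment out - it is never measured
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
joint = CNOT @ np.kron(rho_A, np.outer([1, 0], [1, 0])) @ CNOT.T
rho_A_after = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # partial trace over the environment

print(rho_A)          # off-diagonal terms 0.5: coherence present
print(rho_A_after)    # off-diagonal terms 0: now indistinguishable from rho_B
[/code]

World B's mixture is unchanged by the same interaction, so the two worlds do make different predictions once coherence matters.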

Please start a new thread for this; you have completely taken over my thread and I still haven't gotten an answer to my question.
 
  • #47
Oh, and one more thing. You're going on about how this "theory" is fact. It's not, however. I started to read that paper, and the only thing it proves is that Bell's theorem isn't completely right. It proves that complex hidden-variable theories could be possible. It doesn't prove that they are true, though. The fact is, quantum mechanics is still backed by experiments, and it hasn't been proven wrong. Superposition and entanglement are still possible. Since any hidden-variable theory would be identical to quantum mechanics, there's really no way to tell which is right.

Another thing: it's not like you even have a specific theory. All you're going on is that there exists some abstract hidden-variable theory that obeys special relativity and explains quantum mechanics. A requirement of this theory is that it is significantly complex (it would need to explain double-slit interference patterns, which isn't easy to do with hidden variables). This theory would probably be much more complex than quantum mechanics (although QM is weird, it is simple). Occam's razor tells us to go with QM. That, along with the fact that QM came first and this theory would make no new predictions, means there is nothing to be gained by switching to some theory that hasn't even been thought of yet! My question was obviously a question about quantum mechanics, where entanglement and superposition are not classical, and I don't understand why you posted about this here.
 
  • #48
Michael879:
The only logical explanation for this experimental result is that the particle goes through both slits at the same time
Not true. See de Broglie-Bohm theory.

You're so right about mn4j, who should not have been allowed to air his epistemologies in your thread.
 
  • #49
Yeah, that was a pretty long rant; I am not surprised I screwed something up. Anyway, I do understand what he's saying now. Assuming the paper he posted is correct, the "weird" interpretation of QM could possibly be replaced by some hidden-variable interpretation. However, both are equally valid, and this thread was clearly a question about the "weird" interpretation. Anyone got an answer for my original question? Does observation = entanglement? It makes a lot of sense to me. I've always thought it was weird that you can manipulate particles in a superposition without decohering them (including entangling other particles with them). This would fix that weirdness though, since ANY observation would entangle the observer with the particle (and if a particle "observes" another particle this would explain why the original one doesn't decohere).
 
  • #50
michael879 said:
Anyone got an answer for my original question? Does observation = entanglement? It makes a lot of sense to me. I've always thought it was weird that you can manipulate particles in a superposition without decohering them (including entangling other particles with them).

I thought I addressed that in post #35...

If you insist upon a many-worlds view (meaning, if you insist upon quantum dynamics also describing the measurement interactions), then measurement = entanglement with the instrument and the environment. If you use projection, then obviously, measurement destroys entanglement because you pick out one term which is a product state.

But the observable effects of both are identical.

If systems A and B are entangled, then you cannot observe an interference effect of system A alone; interference effects are now only possible as correlations between A and B. If systems A, B and C are entangled, then no interference effects show up anymore between A and B, but interference effects show up between A, B and C.

So you see that if you entangle many, many systems, you will never observe interference effects anymore in low-order correlations. For all practical purposes, the correlated systems behave as if they were just subject to a statistical mixture, as long as one limits oneself to low-order correlations between a relatively low number of different outcomes - which is practically always the case.
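A small numerical sketch of this point (a single qubit carrying a phase versus the same qubit entangled with a partner; the gates and phases are arbitrary illustrative choices):

[code]
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
phase = lambda phi: np.diag([1, np.exp(1j * phi)])

for phi in (0, np.pi / 2, np.pi):
    # qubit alone: |0> -> H -> phase -> H, then probability of reading 0
    psi_A = H @ phase(phi) @ H @ np.array([1, 0])
    p_alone = abs(psi_A[0]) ** 2                       # oscillates with phi: interference

    # same operations on qubit A of an entangled pair (|00> + |11>)/sqrt(2)
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    psi_AB = np.kron(H @ phase(phi), np.eye(2)) @ bell
    p_marginal = abs(psi_AB[0]) ** 2 + abs(psi_AB[1]) ** 2   # P(A = 0), B ignored

    print(phi, round(float(p_alone), 3), round(float(p_marginal), 3))
# p_alone sweeps 1 -> 0.5 -> 0 with the phase; p_marginal stays at 0.5 throughout:
# the interference has moved into the A-B correlations
[/code]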
 
  • #51
vanesch said:
The way one infers causal effects is by observation of correlations, when the "cause" is "independently and randomly" selected.

I disagree. I think we can safely infer that the cause of a supernova explosion is the increase of the star's mass beyond a certain limit, without "randomly" selecting a star, bringing it inside the lab and adding mass to it.

If I "randomly" push a button, and I find a correlation between "pushed button" and "light goes on", then I can conclude normally, that the pushed button is the cause of the light going on.

I fail to see why the push needs to be random. The correlation is the same.

But in superdeterminism, one cannot say anymore that "I freely pushed the button".

True.

It could be that I "just happened to push the button" each time the light went on, by previous common cause.

This is self-contradictory. You either "just happen" to push the button at the right time, or the two events (pushing the button and the light going on) are causally related.

The first case is a type of "conspiracy" which has nothing to do with superdeterminism. In a probabilistic universe one can also claim that it just happens that the two events are correlated. There is no reason to assume that a "typical" superdeterministic universe will show correlations between events in the absence of a causal law enforcing those correlations.

In the second case, I see no problem. Yeah, it may be that the causal chain is more complicated than previously thought. Nevertheless, the two events are causally related and one can use the observed correlation to advance science.

So I cannot conclude anymore that there is a causal effect "pushing the button" -> "light goes on". And as such, I cannot deduce anything anymore about closed electrical circuits or anything. There is a causal link, but it could lie in the past, and it is what made at the same time me push the button, and put on the light.

In the same way, I can only conclude from my double blind medical test that there was a common cause that made me "select randomly patient 25 to get the drug" and that made patient 25 get better. It doesn't need to mean that it was the drug that made patient 25 get better. It was somehow a common cause in the past that was both responsible for me picking out patient 25 and for patient 25 to get better.

I understand your point but I disagree. There is no reason to postulate an ancient cause for the patient's response to the medicine. In the case of EPR there is a very good reason to do that and this reason is the recovery of common-sense and logic in physics.
 
  • #52
ueit said:
I understand your point but I disagree. There is no reason to postulate an ancient cause for the patient's response to the medicine. In the case of EPR there is a very good reason to do that and this reason is the recovery of common-sense and logic in physics.
Ah, I see. Your argument hangs on the idea that, although the EPR apparatus and the drug trial are conceptually equivalent examples, invoking superdeterminism makes the theory nicer in the case of EPR (mitigating Bell-type theorems) and uglier in the case of drug trials (challenging traditional science)? My problem with this is that I think it is much nicer to explain everything uniformly, and ugly to have to make retrospective ad hoc decisions about which of the experimenter's decisions were made independently (like choosing who gets the placebo) versus which of those decisions were actually predetermined in some complex manner (like choosing an axis along which to measure spin).
 
  • #53
ueit said:
I disagree. I think we can safely infer that the cause of a supernova explosion is the increase of the star's mass beyond a certain limit, without "randomly" selecting a star, bringing it inside the lab and adding mass to it

No, that's a deduction based upon theory, which is itself based upon many, many observations. In the same way, I don't have to push the button to see the light go on: if I know that there is a charged battery, a switch and wires that I've checked are well-connected, I'm pretty sure the light will go on when I push the switch, without actually doing so.

But before arriving at that point, I (or our ancestors) had to do a lot of observations and inference of causal effects - some erroneous deductions still linger around in things like astrology. And it is this kind of primordial cause-effect relation that can only be established by "freely and randomly" selecting the cause, and by observing a correlation with the effect.

I fail to see why the push needs to be random. The correlation is the same.

Imagine a light that flips on and off every second. Now if I push a button on and off every second, there will be a strong correlation between my pushing the button and the light going on, but I cannot conclude that there's a causal link. If you saw me do this, you'd ask, "Yes, but can you also stop pushing for a bit, to see if the light follows that too?" It is this "element of randomly chosen free will" which allows me to turn the observation of a correlation into an argument for a causal link.

This is self-contradictory. You either "just happen" to push the button at the right time, or the two events (pushing the button and the light going on) are causally related.

That's my point: in superdeterminism, we *think* we are "randomly" pushing the button, but there is a strong causal link (from the past) making us do so at exactly the right moment. So it is absolutely not "random" or "free" but we think so.


The first case is a type of "conspiracy" which has nothing to do with superdeterminism. In a probabilistic universe one can also claim that it just happens that the two events are correlated. There is no reason to assume that a "typical" superdeterministic universe will show correlations between events in the absence of a causal law enforcing those correlations.

I must have expressed myself badly: as you say, in a superdeterministic universe, there is of course an obscure common cause in the past which makes me push the button at exactly the time when it also causes the light to light up. Only, I *think* that I was randomly picking my pushing of the button, and so this *appears* as a conspiracy to me.

In a stochastic universe, it doesn't need to be true that "non-causal" (whatever that means in a stochastic universe!) events are statistically independent, but in that case we can indeed talk about a conspiracy.

Observationally however, both appear identical: we seem to observe correlations between randomly (or erroneously supposed randomly) chosen "cause events" and "effect events", so we are tempted to conclude a direct causal link, which isn't there: in the superdeterministic universe, there is simply a common cause from the past, and in a stochastic universe there is a "conspiracy".

In the second case, I see no problem. Yeah, it may be that the causal chain is more complicated than previously thought. Nevertheless, the two events are causally related and one can use the observed correlation to advance science.

No, one can't, because the causal link is not direct (there's no "cause" and "effect"; we have two "effects" of a common cause in the past). This is like the joke about Rolex watches and expensive cars: you observe people with Rolex watches, and you find out that they are strongly correlated with the people who have expensive cars, so you now look into a mechanism by which "putting on a Rolex" makes you drive an expensive car. Of course this is because there's a common cause in the past: these people are rich! And the cause, being rich, has as effect 1 "wearing a Rolex" and as effect 2 "driving an expensive car". (I'm simplifying social issues here :smile: )

But we're now in the following situation: you pick out people in the street "randomly", you put a Rolex watch on their wrist, and then you see that they drive an expensive car! So this would, in a "normal" universe, make you think that putting on a Rolex watch DOES instantly make you drive an expensive car.
In a superdeterministic universe, this came about because an obscure cause in the past made it so that the people you were going to select were rich - even though you thought you picked them "randomly". So there's no causal effect from putting on a Rolex watch to driving an expensive car. But you would infer one because of your experiment.

I understand your point but I disagree. There is no reason to postulate an ancient cause for the patient's response to the medicine. In the case of EPR there is a very good reason to do that and this reason is the recovery of common-sense and logic in physics.

Well, the medicine might be like the Rolex watch, and the patient's response might be the expensive car.
 
  • #54
vanesch said:
Mmmm, I read a bit of this article - not everything, I admit. But what seems rather strange, in the left column on p 1801, is that the outcomes are apparently allowed to depend on pre-correlated lists that are present in both "computers", together with the choices. But if you do that, you do not even need any source anymore: they can produce, starting from that list, any correlation you want! In other words, if the outcome at a certain moment is both a function of the time of measurement, and a pre-established list of common data, and the settings, then I could program both lists in such a way as to reproduce, for instance, EPR correlations I had previously calculated on a single Monte Carlo simulator. I do not even need a particle source anymore, the instruments can spit out their results without any particle impact. The common cause of the correlation is now to be found in the common list they share.

What this in fact proposes is what's called superdeterminism. It is a known "loophole" in Bell's theorem: if both measurement systems have a pre-established correlation that will influence the outcomes in a specific way, it is entirely possible to reproduce any correlation you want. But it then kills all kinds of scientific inquiry, because any observed correlation at any moment can always have been pre-established by a "common list" in the different measurement systems.

I notice from your posts elsewhere that you still claim that Bell's theorem has not been refuted. Here are two papers clearly explaining some of the problems I mentioned with Bell's theorem, including with your purported proof above:

A Refutation of Bell's Theorem
Guillaume Adenier
http://arxiv.org/abs/quant-ph/0006014
Foundations of Probability and Physics XIII (2001)

Interpretations of quantum mechanics, and interpretations of violation of Bell's inequality
Willem M. de Muynck
http://arxiv.org/abs/quant-ph/0102066v1
Foundations of Probability and Physics XIII (2001)

These articles are well worth the read for anyone interested in this matter.

To summarize the first one: proofs of Bell's theorem are not accurate mathematical models of the experiments which they purport to model. Thus a contradiction between Bell's theorem and experimental results is expected and does not contradict any of the premises of Bell's theorem. Whereas in proofs of Bell's theorem the expectation values are calculated for what would have happened if a single photon pair with the same set of local hidden variables were measured multiple times, in real experiments a different photon pair with a different set of local hidden variables is measured each time. Thus comparing the experimental results with Bell's inequality is comparing apples and oranges.

The second article shows that Bell's inequality can be derived without assuming locality, and then goes on to show that although non-locality can be a reason for the violation of Bell's inequality, there are other, more plausible, local reasons for its violation.
 
  • #55
mn4j said:
I notice from your posts elsewhere that you still claim that Bell's theorem has not been refuted. Here are two papers clearly explaining some of the problems I mentioned with Bell's theorem

These are not refutations of Bell's theorem, but refutations of misunderstandings of Bell's theorem.

from p6 of the second article:
From the experimental violation of Bell’s inequality it follows that an objectivistic-realist interpretation of the quantum mechanical formalism, encompassing the ‘possessed values’ principle, is impossible. Violation of Bell’s inequality entails failure of the ‘possessed values’ principle (no quadruples available).

This is what Bell claims: that there cannot be pre-determined outcomes pre-programmed in the two particles for all directions, that generate the correlations found by quantum theory. That's all. And that's not refuted.

Many people see in Bell a kind of proof of non-locality, which is wrong. It becomes a proof of non-locality when additional assumptions are made.

In MWI, for instance, Bell is explained in a totally local way.

But this is not what Bell's theorem is about. Bell's theorem proves that there cannot be a list of pre-programmed outcomes for all possible measurement results in both particles which give rise to the quantum correlations. Period.

And that's not refuted.
 
  • #56
mn4j said:
A Refutation of Bell's Theorem
Guillaume Adenier
http://arxiv.org/abs/quant-ph/0006014
Foundations of Probability and Physics XIII (2001)

This paper insists on a well-known criticism of Bell's theorem (rediscovered many times), namely the fact that one cannot perform the correlation measurements that enter into the Bell expressions by doing them on THE SAME SET of pairs of particles: one measures one correlation value on set 1, one measures the second correlation on set 2, etc...
And then it is argued that the inequality was derived from a single set of data, while the measurements are derived from 4 different sets.

But this is erroneous, for two reasons. The first reason is that the inequality is not derived from a single set of data, but FROM A PROBABILITY DISTRIBUTION. If the 4 sets are assumed to be 4 fair samples of that same probability distribution, then there is nothing wrong in establishing 4 expectation values on the 4 different fair samples. This is based upon the hypothesis of fair sampling, which is ALWAYS a necessary hypothesis in all of science. Without that hypothesis, nothing of any generality could ever be deduced. We come back to our double-blind test in medicine. If a double-blind test indicates that a medicine is effective in 80% of the cases, then I ASSUME that this will be its efficacy ON ANOTHER FAIR SAMPLE too. If the fact of now having a different fair sample puts these 80% in doubt, then the double-blind test was entirely useless.

But the second reason is that for one single sample, you can never violate any Bell inequality, by mathematical requirement. Within a single sample, all kinds of correlations AUTOMATICALLY follow a Kolmogorov distribution, and will always satisfy all kinds of Bell inequalities. It is mathematically impossible to violate a Bell inequality by working with a single sample, and by counting items in that sample. This is what our good man establishes in equation (35). As I said, this has been "discovered" several times by local realists.

But let's go back to equation (34). If N is very large, nobody will deny that each of the 4 terms will converge individually to its expectation value within a statistical error. If we cannot assume that the average of a large number of random variables from the same distribution will somehow converge to its expectation value, then ALL OF STATISTICS falls on its butt, and with it, most of science which is based upon statistical expectation values (including our medical tests). And when you do assume this, you can replace the individual terms by their expectation values, and we're back to square one.

So the whole argument rests on the claim that the average of a huge number of samples of a random variable does not converge to its expectation value...
 
  • #57
vanesch said:
But this is erroneous, for two reasons. The first reason is that the inequality is not derived from a single set of data, but FROM A PROBABILITY DISTRIBUTION. If the 4 sets are assumed to be 4 fair samples of that same probability distribution, then there is nothing wrong in establishing 4 expectation values on the 4 different fair samples.
If you read this article carefully, you will notice that assuming 4 different fair samples WITH DIFFERENT HIDDEN VARIABLES, you end up with a different inequality, which is never violated by any experiment or by quantum mechanics.

This is based upon the hypothesis of fair sampling, which is ALWAYS a necessary hypothesis in all of science.
A sample in which the parameter being estimated is assumed to be the same is in fact a fair sample. But this is not the kind of fair sample we are interested in here. Using the example of a source of waves, the hidden variables being (amplitude, phase, frequency), the kind of fair sample you are talking about is one in which all the waves produced have exactly the same VALUES for those variables. However, the sample we are interested in for Bell's inequality does not have to have the same values. The only important requirement is that those variables be present. You therefore cannot draw inferences about this extended sample space by using your very restricted sample space.

Without that hypothesis, nothing of any generality could ever be deduced. We come back to our double-blind test in medicine. If a double-blind test indicates that a medicine is effective in 80% of the cases, then I ASSUME that this will be its efficacy ON ANOTHER FAIR SAMPLE too. If the fact of now having a different fair sample puts these 80% in doubt, then the double-blind test was entirely useless.
What you have done is determine the 80% by testing the same individual 100 times and observing that the medicine is effective 80 times; then, after measuring 100 different people and finding 50%, you are making an inference by comparing the 80% (apples) with the 50% (oranges).

Try to repeat your proof of Bell's theorem considering that each sample measured has its own hidden-variable VALUE. You cannot reasonably assume that all samples have exactly the same hidden-variable values (which is your definition of fair sampling), because nobody has ever done an experiment in which they made sure the hidden variables had exactly the same values when measured. So again, the criticism is valid and the proof is not an accurate model of any of the performed Aspect-type experiments.

But the second reason is that for one single sample, you can never violate any Bell inequality, by mathematical requirement.
This is unproven. Nobody has ever done an Aspect-type experiment in which they measure the same photon multiple times, which is a necessary precondition to be able to verify any Bell inequality. I will wager that if such an experiment were ever done (if at all it is possible), Bell's inequality will not be violated.
Within a single sample, all kinds of correlations AUTOMATICALLY follow a Kolmogorov distribution, and will always satisfy all kinds of Bell inequalities. It is mathematically impossible to violate a Bell inequality by working with a single sample, and by counting items in that sample. This is what our good man establishes in equation (35). As I said, this has been "discovered" several times by local realists.
What he shows leading up to (35) is that for a single sample, even quantum mechanics does not predict the violation of Bell's inequality, and therefore Bell's theorem cannot be established within the weakly objective interpretation. In other words, Bell's inequality is based squarely on measuring the same sample multiple times.
But let's go back to equation (34). If N is very large, nobody will deny that each of the 4 terms will converge individually to its expectation value within a statistical error.
This is false: there can be no factorization of that equation because the terms are different, even if N is large. There is therefore no basis for this conclusion from that equation. You cannot escape the conclusion that S <= 4 by saying that, as N becomes large, S <= 2 sqrt(2).
If we cannot assume that the average of a large number of random variables from the same distribution will somehow converge to its expectation value, then ALL OF STATISTICS falls on its butt
Not true. There is no such thing as "its expectation value" when dealing with a few hidden variables taking a large number of random values. Take a source that produces waves whose hidden variables (amplitude, phase, frequency) take random values. If this is how statistics is done, people will soon claim that the "expectation value" of the amplitude, as N becomes very large, is zero. But if it were possible to measure the exact same wave N times, you would definitely get a different result. The latter IS the expectation value; the former is NOT.
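A small numerical illustration of the distinction being drawn here (the amplitude distribution and the noise level are invented placeholders): averaging over an ensemble of emissions estimates the mean of the amplitude distribution, while repeatedly measuring one and the same wave estimates that particular wave's own amplitude.

[code]
import numpy as np

rng = np.random.default_rng(2)

# Ensemble: every emission carries its own randomly drawn amplitude.
# The uniform range below is an invented placeholder, not a physical model.
amplitudes = rng.uniform(0.5, 2.0, size=100_000)
print("average over the ensemble of emissions:", round(float(amplitudes.mean()), 3))

# "The exact same wave measured N times": one fixed amplitude, remeasured
# repeatedly with only a little (hypothetical) instrumental noise.
a0 = amplitudes[0]
repeated = a0 + rng.normal(0.0, 0.01, size=100_000)
print("average over repeated measurements    :", round(float(repeated.mean()), 3))
print("amplitude of that particular wave     :", round(float(a0), 3))
[/code]

Whether the ensemble mean deserves to be called "the" expectation value is exactly the point under dispute; the sketch only shows that the two averages need not coincide.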
 
  • #58
mn4j said:
If you read this article carefully, you will notice that assuming 4 different fair samples WITH DIFFERENT HIDDEN VARIABLES, you end up with a different inequality, which is never violated by any experiment or by quantum mechanics.

Look, the equation is the following:
[tex]1/N \sum_{i=1}^N \left( R^1_i + S^2_i + T^3_i + U^4_i \right) [/tex]

And then the author concludes that for a single value of i, one has no specific limiting value of the expression [tex]R^1_i + S^2_i + T^3_i + U^4_i [/tex]. But that's not the issue. The issue is that if we apply the sum, that our expression becomes:
[tex] 1/N \sum_{i=1}^N R^1_i + 1/N \sum_{i=1}^N S^2_i + 1/N \sum_{i=1}^N T^3_i + 1/N \sum_{i=1}^N U^4_i [/tex]

And, assuming that each sample set, 1, 2, 3 and 4, is fairly drawn from the overall distribution of hidden variables, we can conclude that:
[itex]1/N \sum_{i=1}^N T^j_i[/itex] will, for a large value of N, be close to the expectation value < T > over the probability distribution of hidden variables, independently of which fair sample j (1, 2, 3 or 4) it has been calculated over.
As such, our sum is a good approximation (for large N) of:
< R > + < S > + < T > + < U >
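A minimal Python sketch of this convergence argument (the hidden-variable distribution and the four bounded functions standing in for R, S, T and U are arbitrary placeholders, not taken from the paper): four independently drawn fair samples from the same stationary distribution give sample means whose sum approaches < R > + < S > + < T > + < U >, just as if all four terms had been computed on one common sample.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Placeholder hidden variable: an angle lambda, uniform on [0, 2*pi).
# R, S, T, U stand in for the four CHSH-type terms; here they are just
# arbitrary bounded (+/-1) functions of lambda, assumed for illustration.
def R(lam): return np.sign(np.cos(lam))
def S(lam): return np.sign(np.cos(lam - 0.5))
def T(lam): return np.sign(np.cos(lam - 1.0))
def U(lam): return -np.sign(np.cos(lam - 1.5))

N = 100_000

# Four DIFFERENT fair samples, each drawn independently from the same
# stationary distribution of the hidden variable.
samples = [rng.uniform(0, 2 * np.pi, N) for _ in range(4)]
means = [f(s).mean() for f, s in zip((R, S, T, U), samples)]
print("four sample means:", np.round(means, 3))
print("sum over four different samples:", round(float(sum(means)), 3))

# Compare: all four terms evaluated on one single common sample.
common = rng.uniform(0, 2 * np.pi, N)
print("sum over one common sample     :",
      round(float(sum(f(common).mean() for f in (R, S, T, U))), 3))
[/code]

For large N the two printed sums agree to within statistical error, which is all the decomposition above requires.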
 
  • #59
mn4j said:
A sample in which the parameter being estimated is assumed to be the same is in fact a fair sample. But this is not the kind of fair sample we are interested in here. Using the example of a source of waves, the hidden variables being (amplitude, phase, frequency), the kind of fair sample you are talking about is one in which all the waves produced have exactly the same VALUES for those variables.

No, I'm assuming that they are drawn from a certain unknown distribution, but that that distribution doesn't change when I change my measurement settings. In other words, that I get a statistically equivalent set for measurement setting 1 and for measurement setting 2. The reason for that is that I can arbitrarily pick my settings 1, 2 ... and that at the moment of DRAWING the element from the distribution, it has not been determined yet what setting I will use. As such, I assume that the distribution of hidden variables is statistically identical for the samples 1, 2, 3, and 4, and hence that the expectation values are those of identical distributions.

However, the sample we are interested in for Bell's inequality does not have to have the same values. The only important requirement is that those variables be present. You therefore cannot draw inferences about this extended sample space by using your very restricted sample space.

If you assume them to be statistically identical, yes you can.

What you have done is determine the 80% by testing the same individual 100 times and observing that the medicine is effective 80 of those times, and then, after measuring 100 different people and finding 50%, you make an inference by comparing the 80% (apples) with the 50% (oranges).

Well, even that would statistically be OK for the average. If I test for the same individual 80% chances, and then I test for another individual 70% chances, and so on... I will find a certain distribution for the "expectation values per person". If I now take a random drawing of 100 persons and do the measurement only once, I will get a distribution which, if my former individuals were fairly sampled, has the same average.

Try to repeat your proof of Bell's theorem considering that each sample measured has its own hidden variable VALUE. You cannot reasonably assume that all samples have exactly the same hidden variable values (which is your definition of fair sampling), because nobody has ever done an experiment in which they made sure the hidden variables had exactly the same values when measured.

You don't assume that they have the same values in the same order, but you do assume of course that they are drawn from the same distribution. Hence averages should be the same. This is like having a population in which you pick 1000 people and you measure their weight and height. Next you pick (from the same population) 1000 other people and you measure their weight and height again. Guess what? You'll find the same correlations twice. Even though their "hidden" variables were "different". Now, imagine that you find a strong correlation between weight and height. Now, you pick again 1000 different people (from the same population), and you measure weight and footsize. Next, still 1000 different people, and you measure height and footsize. It's pretty obvious that if the correlation of weight with footsize is strong, you ought also to find a strong correlation between height and footsize.
What you are claiming now (what the paper is claiming) is that, because we've measured these correlations on DIFFERENT SETS of people, this shouldn't be the case, even though, if we did this on a single set of 1000 people, we would find it.
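A quick simulation of this point (the toy population model below is invented purely for illustration): two disjoint groups drawn from one and the same population reproduce essentially the same correlations, even though the individuals, and hence their "hidden" values, are all different.

[code]
import numpy as np

rng = np.random.default_rng(3)

def draw_group(n):
    """Toy population: height drives both weight and foot size, plus noise.
    All coefficients are invented for illustration only."""
    height = rng.normal(170, 10, n)                     # cm
    weight = 0.9 * height - 90 + rng.normal(0, 8, n)    # kg
    foot = 0.15 * height + rng.normal(0, 1.2, n)        # cm
    return height, weight, foot

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Two disjoint groups of 1000 "people", drawn from the same population.
h1, w1, f1 = draw_group(1000)
h2, w2, f2 = draw_group(1000)

print("group 1: corr(weight, height) =", round(corr(w1, h1), 2))
print("group 2: corr(weight, height) =", round(corr(w2, h2), 2))
print("group 1: corr(weight, foot)   =", round(corr(w1, f1), 2))
print("group 2: corr(height, foot)   =", round(corr(h2, f2), 2))
[/code]

The two groups give statistically indistinguishable correlations; that, and nothing stronger, is the sense of "fair sampling" being assumed above.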
 
  • #60
vanesch said:
No, I'm assuming that they are drawn from a certain unknown distribution, but that that distribution doesn't change when I change my measurement settings. In other words, that I get a statistically equivalent set for measurement setting 1 and for measurement setting 2. The reason for that is that I can arbitrarily pick my settings 1, 2 ... and that at the moment of DRAWING the element from the distribution, it has not been determined yet what setting I will use. As such, I assume that the distribution of hidden variables is statistically identical for the samples 1, 2, 3, and 4, and hence that the expectation values are those of identical distributions.

Yes, you are assuming that each time the experiment is performed, the hidden variable values of the photons leaving the source are randomly selected from the same distribution of hidden variable values. How then can you know that you are in fact selecting the values in a random manner without actually knowing the behaviour of the hidden variable? You still do not understand the fact that nobody has ever done this experiment the way you are assuming it. Nobody has ever taken steps to ensure that the distribution of the samples is uniform as you claim; mere repetition multiple times is not enough, as such an experimental system will be easily fooled by a time-dependent hidden variable or by a source in which the hidden variable value of the second photon pair emitted is related to the hidden variable value of the first photon pair emitted. Thus, the system you model imposes a drastically reduced hidden-variable space, and does not accurately model actual Aspect-type experiments.

If you assume them to be statistically identical, yes you can.
As I have already pointed out above, this assumption unnecessarily limits the hidden-variable space, and has never been enforced in real Aspect-type experiments. The critique stands!

Well, even that would statistically be OK for the average. If I test for the same individual 80% chances, and then I test for another individual 70% chances, and so on... I will find a certain distribution for the "expectation values per person". If I now take a random drawing of 100 persons and do the measurement only once, I will get a distribution which, if my former individuals were fairly sampled, has the same average.
But that's not what you are doing. What you are actually doing is deriving an inequality for measuring a single individual 100 times, and using that to compare with actually measuring 100 different individuals. For the analogy to work, you must never actually measure a single individual more than once, since nobody has ever actually done that in any Aspect-type experiment.

You don't assume that they have the same values in the same order, but you do assume of course that they are drawn from the same distribution. Hence averages should be the same. This is like having a population in which you pick 1000 people and you measure their weight and height. Next you pick (from the same population) 1000 other people and you measure their weight and height again. Guess what? You'll find the same correlations twice. Even though their "hidden" variables were "different". Now, imagine that you find a strong correlation between weight and height. Now, you pick again 1000 different people (from the same population), and you measure weight and footsize. Next, still 1000 different people, and you measure height and footsize. It's pretty obvious that if the correlation of weight with footsize is strong, you ought also to find a strong correlation between height and footsize.
If you take 1000 persons and measure their height and weight exactly once each, it will tell you absolutely nothing about what you will obtain if you measure a single person 1000 times. If you find a correlation between weight and footsize in the 1000 measurements of the same individual, the ONLY correct inference is that you have a systematic error in your equipment. However, if you find a correlation between weight and footsize in the 1000 measurements from different individuals, there are two possible inferences, neither of which you can reasonably eliminate without further experimentation:
1- systematic error in equipment
2- Real relationship between weight and footsize

It would be fallacious to interpret the correlation in the single-person/multiple-measurement result as meaning that there is a real relationship between weight and footsize.

What you are claiming now (what the paper is claiming) is that, because we've measured these correlations on DIFFERENT SETS of people, this shouldn't be the case, even though, if we did this on a single set of 1000 people, we would find it.
No! What the paper is claiming is the following, in the words of the author:
It was shown that Bell’s Theorem cannot be derived, either within a strongly objective interpretation of the CHSH function, because Quantum Mechanics gives no strongly objective results for the CHSH function, or within a weakly objective interpretation, because the only derivable local realistic inequality is never violated, either by Quantum Mechanics or by experiments.
...
Bell’s Theorem, therefore, is refuted.
 
  • #61
mn4j said:
Yes, you are assuming that each time the experiment is performed, the hidden variable values of the photons leaving the source are randomly selected from the same distribution of hidden variable values. How then can you know that you are in fact selecting the values in a random manner without actually knowing the behaviour of the hidden variable?

This is exactly what you assume when you do "random sampling" of a population. Again, if you think that there are pre-determined correlations between measurement apparatus, timing or whatever, then you are adopting some kind of superdeterminism, and you would be running into the kind of problems we've discussed before, even with medical tests.


You still do not understand the fact that nobody has ever done this experiment the way you are assuming it. Nobody has ever taken steps to ensure that the distribution of the samples is uniform as you claim; mere repetition multiple times is not enough, as such an experimental system will be easily fooled by a time-dependent hidden variable or by a source in which the hidden variable value of the second photon pair emitted is related to the hidden variable value of the first photon pair emitted.

You can sample the photons randomly in time. You can even wait half an hour between each pair you want to observe, and throw away all the others. If you still assume that there is any correlation between the selected pairs, then this is equivalent to superdeterminism.
That is like saying that there is a dependency between picking the first and the second patient that will get the drug, and between the first and the second patient that will get the placebo.

As I have already pointed out above, this assumption unnecessarily limits the hidden-variable space, and has never been enforced in real Aspect-type experiments. The critique stands!

You may know that, especially in the first Aspect experiments, the difficulty was the inefficiency of the setup, which gave the experiment a very low count rate. As such, the involved pairs of photons were separated by very long time intervals compared to the lifetime of a photon in the apparatus (we are talking about factors of 10^12).
There is really no reason (apart from superdeterminism or conspiracies) to assume that the second pair had anything to do with the first.

If you take 1000 persons and measure their height and weight exactly once each, it will tell you absolutely nothing about what you will obtain if you measure a single person 1000 times. If you find a correlation between weight and footsize in the 1000 measurements of the same individual, the ONLY correct inference is that you have a systematic error in your equipment. However, if you find a correlation between weight and footsize in the 1000 measurements from different individuals, there are two possible inferences, neither of which you can reasonably eliminate without further experimentation:
1- systematic error in equipment
2- Real relationship between weight and footsize

Yes, but I was not talking about measuring 1 person 1000 times versus measuring 1000 persons 1 time each; I was talking about measuring 1000 persons 1 time each, and then measuring 1000 OTHER persons 1 time each again.

You do realize that the 4 samples in an Aspect-type experiment are taken "through one another", don't you?
You do a setting A, and you measure an element of sample 1
you do setting B and you measure an element of sample 2
you do a setting A again, and you measure the second element of sample 1
you do a setting D and you measure an element of sample 4
you do a setting C and you measure an element of sample 3
you do a setting A and you measure the third element of sample 1
you ...

by quickly changing the settings of the polarizers for each measurement.
And now you tell me that the first, third and sixth measurements are all "on the same element"?
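A schematic of the interleaved bookkeeping just described, as a minimal Python sketch (the hidden variable and the ±1 response function are placeholders chosen purely for illustration, not a model of the real apparatus):

[code]
import numpy as np

rng = np.random.default_rng(4)

settings = ["A", "B", "C", "D"]
samples = {s: [] for s in settings}

# Interleaved data-taking: each emission gets a fresh hidden variable, and
# the setting is only chosen at measurement time.
for emission in range(12):
    lam = rng.uniform(0, 2 * np.pi)       # fresh hidden variable per pair
    setting = rng.choice(settings)        # freely chosen setting
    outcome = int(np.sign(np.cos(lam)))   # placeholder deterministic response
    samples[setting].append((emission, outcome))

for s in settings:
    print(s, "->", samples[s])
# Successive entries of sample "A" come from different emissions
# (different lam), not from re-measuring one and the same element.
[/code]

Each entry of a given sample comes from a fresh emission, and the setting is only picked at measurement time; that independence is exactly what is at stake in this exchange.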
 
  • #62
Vanesh,

I think that you require too much from a scientific theory. You require it to be true in some absolute sense.

In the case of the medical test in a superdeterministic universe, the theory that the medicine cured the patient is perfectly good from a practical standpoint, as it will always predict the correct result. The fact that, unknown to us, there is a different cause in the past does not render the theory useless. It is wrong, certainly, but probably no worse than all our present scientific theories.

Every physical theory to date, including QM and GR, is wrong in an absolute sense, but we are still able to make use of them.
 
  • #63
ueit said:
In the case of the medical test in a superdeterministic universe, the theory that the medicine cured the patient is perfectly good from a practical standpoint, as it will always predict the correct result. The fact that, unknown to us, there is a different cause in the past does not render the theory useless. It is wrong, certainly, but probably no worse than all our present scientific theories.

This is entirely correct, and it is an attitude that goes with the "shut up and calculate" approach. Contrary to what you think - and if you read my posts then you should know this - I don't claim at all that our current theories are in any way "absolutely true". I only say that *if* one wants to make an ontological hypothesis (that is, IF one wants to pretend that they are true in some sense), then such-and-such follows, knowing that this is only some kind of game to play. But it is *useful* to play that game, for exactly the practical reason you give above.

Even if it is absolutely not true that taking a drug cures you, and taking a drug only comes down to doing something that was planned long ago, with the same cause also making you get better, our pharmacists still have a (in that case totally wrong) way of thinking about how the drug acts in the body and cures you. They had better stick to that wrong picture, which helps them make "good drugs" (in the practical sense), than be convinced that they don't understand anything about how drugs work in the human body, which would render it impossible for them to design new drugs, given that their design procedures are based upon a totally wrong picture of reality.
So if nature "conspires" to make us think that drugs cure people (even if it is just a superdeterministic correlation), then it is practically seen, a good idea to devellop an ontological hypothesis in which people get cured by drugs.

It is in this light that I see MWI too: even if it is absolutely not true in an ontological sense, if nature conspires to make us think that the superposition principle is correct, then it is a good idea to develop an ontological hypothesis in which this superposition principle is included. Whether this is "really true" or not, you will get a better intuition for quantum theory, in the same way the pharmacist gets a better feeling for the design of drugs based upon his wrong hypothesis that it is the drugs that cure the people.
 
  • #64
vanesch said:
You can sample the photons randomly in time.
This CANNOT be done unless you know the time-behavior of the variables. You seem to be assuming that each variable has a single value with a simple normal distribution. What if the value of a variable changes like a cos(kw + at) function over time? If you don't know this beforehand, there is no way you can determine the exact behavior of the function by random sampling. If you take "random" samples of this function, you end up with a rather flat distribution, which does not tell you anything about the behavior of the variable.

vanesch said:
There is really no reason (apart from superdeterminism or conspiracies) to assume that the second pair had anything to do with the first.
On the contrary, the mere fact that they come from the same source gives me more than ample reason, no conspiracy. We are trying to find hidden variables here, are we not? Therefore, to make an arbitrary assumption without foundation that the emission of the first pair of photons does not change the source characteristics in a way that can affect the second pair is very unreasonable. No matter how long the time is between the emissions. Do you have any scientific reason to believe that hidden variables MUST not have that behavior?

vanesch said:
Yes, but I was not talking about measuring 1 person 1000 times versus measuring 1000 persons 1 time each; I was talking about measuring 1000 persons 1 time each, and then measuring 1000 OTHER persons 1 time each again.
Yes, and it does not change the fact that your results will tell you absolutely nothing about what you would obtain by measuring a single person 1000 times.
vanesch said:
You do realize that the 4 samples in an Aspect-type experiment are taken "through one another", don't you?
You do a setting A, and you measure an element of sample 1
you do setting B and you measure an element of sample 2
you do a setting A again, and you measure the second element of sample 1
you do a setting D and you measure an element of sample 4
you do a setting C and you measure an element of sample 3
you do a setting A and you measure the third element of sample 1
you ...

by quickly changing the settings of the polarizers for each measurement.
And now you tell me that the first, third and sixth measurements are all "on the same element"?
No. I'm telling you that the results of this experiment cannot and should not be compared with calculations based on measuring a single element multiple times. Your experiment will tell you about ensemble averages, but it will never tell you about the behavior of a single element.
 
  • #65
It may be more helpful to consider thought experiments for which (unitary, no fundamental collapse) quantum mechanics makes different predictions. I think that David Deutsch has given one such example involving an artificially intelligent observer implemented by a quantum computer. I don't remember the details of this thought experiment, though...
 
  • #66
mn4j said:
This CANNOT be done unless you know the time-behavior of the variables. You seem to be assuming that each variable has a single value with a simple normal distribution.

I'm assuming that, whatever the time dependence of the variables, it should not be correlated with the times of the measurements, and there is an easy way to establish that: change the sampling rates, sample at randomly generated times... If the expectation values are always the same, we can reasonably assume that there is no time correlation. Also, if there are long times between the different elements of a sample, I can assume that there is no time coherence left.
I make no other assumption about the distribution of the hidden variables than its stationarity.

What if the value of a variable changes like a cos(kw + at) function over time? If you don't know this beforehand, there is no way you can determine the exact behavior of the function by random sampling.

No, but I can determine the statistical distribution of the samples taken at random times of this function. I can hence assume that, if I take random time samples, I draw them from this distribution.


If you take "random" samples of this function, you end up with a rather flat distribution, which does not tell you anything about the behavior variable.

First of all, it won't be flat; it will be peaked at the sides. But no matter, that is sufficient. If I assume that the variable is distributed in this way ("flat" or otherwise), that's good enough, because that is how this variable IS distributed when the sample times are incoherent with the time function.
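A quick numerical check of this (the waveform and the sampling scheme are placeholders for illustration): sampling cos(ωt + φ) at times that are incoherent with its period gives a histogram peaked at the two extremes (the arcsine law), not a flat one, yet it is a perfectly stable distribution to draw from.

[code]
import numpy as np

rng = np.random.default_rng(5)

# Sample a pure cosine at times that are incoherent with its period.
# Frequency, phase and the time window are arbitrary placeholders.
omega, phi = 2 * np.pi * 1.234, 0.7
t = rng.uniform(0, 1e4, size=1_000_000)
values = np.cos(omega * t + phi)

counts, edges = np.histogram(values, bins=10, range=(-1, 1), density=True)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:+.1f}, {hi:+.1f})  " + "#" * int(20 * c))
# The histogram is peaked near -1 and +1 (the arcsine law), not flat, and
# it is a stable, stationary distribution to draw samples from.
[/code]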

On the contrary, the mere fact that they come from the same source gives me more than ample reason, no conspiracy. We are trying to find hidden variables here, are we not? Therefore, to make an arbitrary assumption without foundation that the emission of the first pair of photons does not change the source characteristics in a way that can affect the second pair is very unreasonable.

That is not unreasonable at all, because the "second pair" will in fact be the trillionth pair or something. In order for your assumption to hold, the first pair would have to influence EXACTLY THOSE pairs that we are going to decide to measure, maybe half an hour later, when we arbitrarily decide to set the polarizers back to exactly the same settings.

It is then very strange that we never see any variation in the expectation values of any of the samples, whether we sample 1 microsecond later or half an hour later... but that this hidden variation is EXACTLY what is needed to produce Bell-type correlations. This is nothing other than an assumption of superdeterminism or conspiracy.

No matter how long the time is between the emissions. Do you have any scientific reason to believe that hidden variables MUST not have that behavior?

Well, as I said, that kind of behaviour is superdeterminism or conspiracy, which is by hypothesis excluded in Bell's theorem: Bell starts out from hidden variables that come from the same distribution for each individual trial. The reason for that is the assumption (spelled out in the premises of Bell's theorem) that the "free choice" is really a free, and hence statistically independent, choice of the settings, together with the assumption that the measurement apparatus gives the outcome deterministically and in a stationary way as a function of the received hidden variable.

No. I'm telling you that the results of this experiment cannot and should not be compared with calculations based on measuring a single element multiple times. Your experiment will tell you about ensemble averages, but it will never tell you about the behavior of a single element.

Sure. But the theorem is about ensemble averages of a stationary distribution. That's exactly what Bell's theorem tells us: that we cannot reproduce the correlations as ensemble averages of a single stationary distribution which deterministically produces all possible outcomes.

Assuming that the distributions are stationary, we are allowed to measure these correlations on different samples (drawn from the same distribution).

As such, the conclusion is that they cannot come from a stationary distribution. That's what Bell's theorem tells us. No more, no less.

So telling me that the distributions are NOT stationary, but are CORRELATED with the settings of the measurement apparatus (or something equivalent, such as the sample times...), and that the measurements are not deterministic as a function of the elements of the distribution, is nothing other than denying one of the premises of Bell's theorem. One shouldn't then be surprised to find other outcomes.

Only, if you assume that the choices are FREE and UNCORRELATED, you cannot make the above hypothesis.

It is well-known that making the above hypotheses (and hence making the assumption that the choices of the measurement apparatus settings and the actions of the measurement apparatus are somehow correlated) allows one to get EPR-type results. But it amounts to superdeterminism or conspiracy.

So you can now say that the Aspect results demonstrate superdeterminism or conspiracy. Fine. So?
 
