Does Observing a Particle in Superposition Entangle You With It?

  • Thread starter: michael879
  • Tags: Entanglement
  • #51
vanesch said:
The way one infers causal effects is by observation of correlations, when the "cause" is "independently and randomly" selected.

I disagree. I think we can safely infer that the cause of a supernova explosion is the increase of the star's mass beyond a certain limit, without "randomly" selecting a star, bringing it inside the lab and adding mass to it.

If I "randomly" push a button, and I find a correlation between "pushed button" and "light goes on", then I can conclude normally, that the pushed button is the cause of the light going on.

I fail to see why the push needs to be random. The correlation is the same.

But in superdeterminism, one cannot say anymore that "I freely pushed the button".

True.

It could be that I "just happened to push the button" each time the light went on, by previous common cause.

This is self-contradictory. Either you "just happen" to push the button at the right time, or the two events (pushing the button and the light going on) are causally related.

The first case is a type of "conspiracy" which has nothing to do with superdeterminism. In a probabilistic universe one can also claim that it just happens that the two events are correlated. There is no reason to assume that a "typical" superdeterministic universe will show correlations between events in the absence of a causal law enforcing those correlations.

In the second case, I see no problem. Yeah, it may be that the causal chain is more complicated than previously thought. Nevertheless, the two events are causally related and one can use the observed correlation to advance science.

So I cannot conclude anymore that there is a causal effect "pushing the button" -> "light goes on". And as such, I cannot deduce anything anymore about closed electrical circuits or anything. There is a causal link, but it could lie in the past, and it is what made at the same time me push the button, and put on the light.

In the same way, I can only conclude from my double blind medical test that there was a common cause that made me "select randomly patient 25 to get the drug" and that made patient 25 get better. It doesn't need to mean that it was the drug that made patient 25 get better. It was somehow a common cause in the past that was both responsible for me picking out patient 25 and for patient 25 to get better.

I understand your point but I disagree. There is no reason to postulate an ancient cause for the patient's response to the medicine. In the case of EPR there is a very good reason to do that and this reason is the recovery of common-sense and logic in physics.
 
  • #52
ueit said:
I understand your point but I disagree. There is no reason to postulate an ancient cause for the patient's response to the medicine. In the case of EPR there is a very good reason to do that and this reason is the recovery of common-sense and logic in physics.
Ah, I see. Your argument hangs on the idea that, although the EPR apparatus and the drug trial are conceptually equivalent examples, invoking superdeterminism makes the theory nicer in the case of EPR (mitigating Bell-type theorems) and uglier in the case of drug trials (challenging traditional science). My problem with this is that I think it is much nicer to explain everything uniformly, and ugly to have to make retrospective, ad hoc decisions about which of the experimenter's decisions were made independently (like choosing who gets the placebo) versus which were actually predetermined in some complex manner (like choosing an axis on which to measure spin).
 
  • #53
ueit said:
I disagree. I think we can safely infer that the cause of a supernova explosion is the increase of the star's mass beyond a certain limit, without "randomly" selecting a star, bringing it inside the lab and adding mass to it

No, that's a deduction based upon theory, which is itself based upon many, many observations. In the same way, I don't have to push the button to see the light go on: if I know that there is a charged battery, a switch, and wires that I've checked are well-connected, I'm pretty sure the light will go on when I push the switch, without actually doing so.

But before arriving at that point, I (or our ancestors) had to do a lot of observations and inference of causal effects - some erroneous deductions still linger around in things like astrology. And it is this kind of primordial cause-effect relation that can only be established by "freely and randomly" selecting the cause, and by observing a correlation with the effect.

I fail to see why the push needs to be random. The correlation is the same.

Imagine a light that flips on and off every second. Now if I push a button on and off every second, there will be a strong correlation between my pushing the button and the light going on, but I cannot conclude that there's a causal link. If you saw me do this, you'd ask, "yes, but can you also STOP the pushing for a bit, to see if the light follows that too?" It is this "element of randomly chosen free will" which allows me to turn the observation of a correlation into an argument for a causal link.
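
For concreteness, here is a minimal Python sketch of this blinking-light scenario (the toggling light, the push schedules, and the function names are all invented for illustration): the periodic pusher sees a perfect correlation even though the button does nothing, while randomly timed pushes expose the absence of a causal link.

```python
import random

# A light that toggles on/off every second, entirely on its own.
def light_is_on(t):
    return int(t) % 2 == 0

def fraction_on_when_pushed(push_times):
    return sum(light_is_on(t) for t in push_times) / len(push_times)

# Scenario 1: push the button every two seconds, in step with the light.
periodic_pushes = [2.0 * k for k in range(1000)]
# Scenario 2: push the button at randomly chosen times.
random_pushes = [random.uniform(0, 2000) for _ in range(1000)]

print("light on at periodic pushes:", fraction_on_when_pushed(periodic_pushes))  # ~1.0
print("light on at random pushes:  ", fraction_on_when_pushed(random_pushes))    # ~0.5
```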

This is self-contradictory. Either you "just happen" to push the button at the right time, or the two events (pushing the button and the light going on) are causally related.

That's my point: in superdeterminism, we *think* we are "randomly" pushing the button, but there is a strong causal link (from the past) making us do so at exactly the right moment. So it is absolutely not "random" or "free" but we think so.


The first case is a type of "conspiracy" which has nothing to do with superdeterminism. In a probabilistic universe one can also claim that it just happens that the two events are correlated. There is no reason to assume that a "typical" superdeterministic universe will show correlations between events in the absence of a causal law enforcing those correlations.

I must have expressed myself badly: as you say, in a superdeterministic universe, there is of course an obscure common cause in the past which makes me push the button at exactly the time when it also causes the light to light up. Only, I *think* that I was randomly picking my pushing of the button, and so this *appears* as a conspiracy to me.

In a stochastic universe, it doesn't need to be true that "non-causal" (whatever that means in a stochastic universe!) events are statistically independent, but in that case we can indeed talk about a conspiracy.

Observationally however, both appear identical: we seem to observe correlations between randomly (or erroneously supposed randomly) chosen "cause events" and "effect events", so we are tempted to conclude a direct causal link, which isn't there: in the superdeterministic universe, there is simply a common cause from the past, and in a stochastic universe there is a "conspiracy".

In the second case, I see no problem. Yeah, it may be that the causal chain is more complicated than previously thought. Nevertheless, the two events are causally related and one can use the observed correlation to advance science.

No, one can't, because the causal link is not direct (there's no "cause" and "effect"; we have two "effects" of a common cause in the past). This is like the joke of the Rolex watches and the expensive cars: you observe people with Rolex watches, and you find out that they are strongly correlated with the people who have expensive cars, so you're now looking into a mechanism by which "putting on a Rolex" makes you drive an expensive car. Of course this is because there's a common cause in the past: these people are rich! And the cause, being rich, has as effect 1 "wearing a Rolex" and effect 2 "driving an expensive car". (I'm simplifying social issues here :smile: )

But we're now in the following situation: you pick out people in the street "randomly", you put a Rolex watch on their wrist, and then you see that they drive an expensive car! So this would, in a "normal" universe, make you think that putting on a Rolex watch DOES instantly make you drive an expensive car.
In a superdeterministic universe, this came about because an obscure cause in the past made it so that the people who were rich were going to be selected by you - even though you thought you picked them "randomly". So there's no causal effect from putting on a Rolex watch to driving an expensive car. But you would infer one because of your experiment.

I understand your point but I disagree. There is no reason to postulate an ancient cause for the patient's response to the medicine. In the case of EPR there is a very good reason to do that and this reason is the recovery of common-sense and logic in physics.

Well, the medicine might be like the Rolex watch, and the patient's response might be the expensive car.
 
  • #54
vanesch said:
Mmmm, I read a bit of this article - not everything, I admit. But what seems rather strange, in the left column on p 1801, is that the outcomes are apparently allowed to depend on pre-correlated lists that are present in both "computers", together with the choices. But if you do that, you do not even need any source anymore: they can produce, starting from that list, any correlation you want! In other words, if the outcome at a certain moment is a function of the time of measurement, a pre-established list of common data, and the settings, then I could program both lists in such a way as to reproduce, for instance, EPR correlations I had previously calculated on a single Monte Carlo simulator. I do not even need a particle source anymore; the instruments can spit out their results without any particle impact. The common cause of the correlation is now to be found in the common list they share.

What this in fact proposes, is what's called superdeterminism. It is a known "loophole" in Bell's theorem: if both measurement systems have a pre-established correlation that will influence the outcomes in a specific way, it is entirely possible to reproduce any correlation you want. But it kills all kind of scientific inquiry then, because any observed correlation at any moment can always be pre-established by a "common list" in the different measurement systems.
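
For concreteness, the "common list" trick described above can be sketched in a few lines of Python (the settings, the trial count, and the target statistics E(a, b) = -cos(a - b) are invented for the example, not taken from the paper): an offline pass that already "knows" which settings each station will choose on every trial writes pre-computed outcomes into both lists, and the stations then reproduce the desired correlations with no source at all.

```python
import math, random

# Superdeterministic "common list" loophole, sketched: outcomes are written
# in advance by a single pass that knows the future setting choices.
settings = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]
n_trials = 100_000

# The settings each station "will freely choose", known in advance here.
alice_settings = [random.choice(settings) for _ in range(n_trials)]
bob_settings = [random.choice(settings) for _ in range(n_trials)]

alice_list, bob_list = [], []
for a, b in zip(alice_settings, bob_settings):
    p_same = (1 - math.cos(a - b)) / 2          # chosen so that E(a,b) = -cos(a-b)
    x = random.choice([-1, 1])
    y = x if random.random() < p_same else -x
    alice_list.append(x)
    bob_list.append(y)

# "Measurement": each station just reads off its pre-programmed value.
def E(a, b):
    prods = [x * y for x, y, sa, sb in
             zip(alice_list, bob_list, alice_settings, bob_settings)
             if sa == a and sb == b]
    return sum(prods) / len(prods)

print("E(0, pi/4)  =", round(E(0.0, math.pi / 4), 2),
      " target:", round(-math.cos(math.pi / 4), 2))
print("E(0, 3pi/4) =", round(E(0.0, 3 * math.pi / 4), 2),
      " target:", round(-math.cos(3 * math.pi / 4), 2))
```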

I notice from your posts elsewhere that you still claim non-refutation of Bell's theorem. Here are two papers clearly explaining some of the problems I mentioned about Bell's theorem, including your purported proof above:

A Refutation of Bell's Theorem
Guillaume Adenier
http://arxiv.org/abs/quant-ph/0006014
Foundations of Probability and Physics XIII (2001)

Interpretations of quantum mechanics, and interpretations of violation of Bell's inequality
Willem M. de Muynck
http://arxiv.org/abs/quant-ph/0102066v1
Foundations of Probability and Physics XIII (2001)

These articles are well worth the read for anyone interested in this matter.

To summarize the first one: proofs of Bell's theorem are not accurate mathematical models of the experiments which they purport to model. Thus a contradiction between Bell's theorem and experimental results is expected and does not contradict any of the premises of Bell's theorem. Whereas in proofs of Bell's theorem the expectation values are calculated for what would have happened if a single photon pair with the same set of local hidden variables were measured multiple times, in real experiments a different photon pair with a different set of local hidden variables is measured each time. Thus comparing the experimental results with Bell's inequality is comparing apples and oranges.

The second article shows that Bell's inequality could be derived without assuming locality, and then goes on to show that although non-locality can be a reason for violation of Bell's inequality, there are other, more plausible, local reasons for violation of Bell's inequality.
 
  • #55
mn4j said:
I notice from your posts elsewhere that you still claim non-refutation of Bell's theorem. Here are two papers clearly explaining some of the problems I mentioned about Bell's theorem

These are not refutations of Bell's theorem, but refutations of misunderstandings of Bell's theorem.

from p6 of the second article:
From the experimental violation of Bell’s inequality it follows that an objectivistic-realist interpretation of the quantum mechanical formalism, encompassing the ‘possessed values’ principle, is impossible. Violation of Bell’s inequality entails failure of the ‘possessed values’ principle (no quadruples available).

This is what Bell claims: that there cannot be pre-determined outcomes pre-programmed in the two particles for all directions, that generate the correlations found by quantum theory. That's all. And that's not refuted.

Many people see in Bell a kind of proof of non-locality, which is wrong. It becomes a proof of non-locality when additional assumptions are made.

In MWI, for instance, Bell is explained in a totally local way.

But this is not what Bell's theorem is about. Bell's theorem proves that there cannot be a list of pre-programmed outcomes for all possible measurement results in both particles which give rise to the quantum correlations. Period.

And that's not refuted.
 
  • #56
mn4j said:
A Refutation of Bell's Theorem
Guillaume Adenier
http://arxiv.org/abs/quant-ph/0006014
Foundations of Probability and Physics XIII (2001)

This paper insists on a well-known criticism of Bell's theorem (rediscovered many times), namely the fact that one cannot perform the correlation measurements that enter into the Bell expressions by doing them on THE SAME SET of pairs of particles: one measures one correlation value on set 1, one measures the second correlation on set 2, etc...
And then it is argued that the inequality was derived from a single set of data, while the measurements are derived from 4 different sets.

But this is erroneous, for two reasons. The first reason is that the inequality is not derived from a single set of data, but FROM A PROBABILITY DISTRIBUTION. If the 4 sets are assumed to be 4 fair samples of that same probability distribution, then there is nothing wrong in establishing 4 expectation values on the 4 different fair samples. This is based upon the hypothesis of fair sampling, which is ALWAYS a necessary hypothesis in all of science. Without that hypothesis, nothing of any generality could ever be deduced. We come back to our double-blind test in medicine. If a double-blind test indicates that a medicine is effective for 80% of the cases, then I ASSUME that this will be its efficiency ON ANOTHER FAIR SAMPLE too. If the fact of having a different fair sample now puts in doubt these 80%, then the double-blind test was entirely useless.

But the second reason is that for one single sample, you can never violate any Bell inequality, by mathematical requirement. Within a single sample, all kinds of correlations AUTOMATICALLY follow a Kolmogorov distribution, and will always satisfy all kinds of Bell inequalities. It is mathematically impossible to violate a Bell inequality by working with a single sample, and by counting items in that sample. This is what our good man establishes in equation (35). As I said, this has been "discovered" several times by local realists.

But let's go back to equation (34). If N is very large, nobody will deny that each of the 4 terms will converge individually to its expectation value within a statistical error. If we cannot assume that the average of a large number of random variables from the same distribution will somehow converge to its expectation value, then ALL OF STATISTICS falls on its butt, and with it, most of science which is based upon statistical expectation values (including our medical tests). And when you do so, you can replace the individual terms by their expectation values, and we're back to square one.

So the whole argument is based upon the fact that when making the average of a huge number of samples of a random variable, this doesn't converge to its expectation value...
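
A minimal Python sketch of that single-sample point (the quadruples here are just random ±1 values, invented for illustration): when every item carries pre-programmed outcomes for all four settings and all four correlations are counted on the same items, the CHSH combination is ±2 item by item, so its average over one sample can never exceed 2.

```python
import random

# Each item carries pre-programmed outcomes (+1 or -1) for all four settings:
# Alice's two settings (a, a') and Bob's two settings (b, b').
N = 100_000
items = [tuple(random.choice([-1, 1]) for _ in range(4)) for _ in range(N)]

# For every single item, a*(b + b') + a'*(b - b') is exactly +2 or -2 ...
assert all(abs(a * (b + bp) + ap * (b - bp)) == 2 for a, ap, b, bp in items)

# ... so the CHSH combination averaged over ONE sample can never exceed 2.
def corr(pairs):
    return sum(x * y for x, y in pairs) / len(pairs)

S = (corr([(a, b) for a, ap, b, bp in items]) +
     corr([(a, bp) for a, ap, b, bp in items]) +
     corr([(ap, b) for a, ap, b, bp in items]) -
     corr([(ap, bp) for a, ap, b, bp in items]))
print("CHSH on a single sample of quadruples:", round(S, 3), "(always within [-2, 2])")
```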
 
  • #57
vanesch said:
But this is erroneous, for two reasons. The first reason is that the inequality is not derived from a single set of data, but FROM A PROBABILITY DISTRIBUTION. If the 4 sets are assumed to be 4 fair samples of that same probability distribution, then there is nothing wrong in establishing 4 expectation values on the 4 different fair samples.
If you read this article carefully, you will notice that assuming 4 different fair samples WITH DIFFERENT HIDDEN VARIABLES, you end up with a different inequality, which is never violated by any experiment or by quantum mechanics.

This is based upon the hypothesis of fair sampling, which is ALWAYS a necessary hypothesis in all of science.
A sample in which the parameter being estimated is assumed to be the same is in fact a fair sample. But this is not the kind of fair sample we are interested in here. Using the example of a source of waves, the hidden variables being (amplitude, phase, frequency), the kind of fair sample you are talking about is one in which all the waves produced have exactly the same VALUES for those variables. However, the sample we are interested in for the Bell's inequality, does not have to have the same values. The only important requirement is that those variables be present. You can therefore not draw inferences about this extended sample space by using your very restricted sample space.

Without that hypothesis, nothing of any generality could ever be deduced. We come back to our double-blind test in medicine. If a double-blind test indicates that a medicine is effective for 80% of the cases, then I ASSUME that this will be its efficiency ON ANOTHER FAIR SAMPLE too. If the fact of having a different fair sample now puts in doubt these 80%, then the double-blind test was entirely useless.
What you have done is to determine the 80% by testing the same individual 100 times and observing that the medicine is effective 80 times; then, after measuring 100 different people and finding 50%, you are making an inference by comparing the 80% (apples) with the 50% (oranges).

Try to repeat your proof of Bell's theorem considering that each sample measured has its own hidden variable VALUE. You cannot reasonably assume that all samples have exactly the same hidden variable values (which is your definition of fair sampling) because nobody has ever done any experiment in which they made sure the hidden variables had exactly the same values when measured. So again, the criticism is valid and the proof is not an accurate model of any of the performed Aspect-type experiments.

But the second reason is that for one single sample, you can never violate any Bell inequality, by mathematical requirement.
This is unproven. Nobody has ever done an Aspect-type experiment in which they measure the same photon multiple times, which is a necessary precondition for verifying any Bell inequality. I will wager that if such an experiment were ever done (if it is possible at all), Bell's inequality will not be violated.
Within a single sample, all kinds of correlations AUTOMATICALLY follow a Kolmogorov distribution, and will always satisfy all kinds of Bell inequalities. It is mathematically impossible to violate a Bell inequality by working with a single sample, and by counting items in that sample. This is what our good man establishes in equation (35). As I said, this has been "discovered" several times by local realists.
What he shows leading up to (35) is that for a single sample, even quantum mechanics does not predict the violation of Bell's inequality, and therefore Bell's theorem cannot be established within the weakly objective interpretation. In other words, Bell's inequality is based squarely on measuring the same sample multiple times.
But let's go back to equation (34). If N is very large, nobody will deny that each of the 4 terms will converge individually to its expectation value within a statistical error.
This is false; there can be no factorization of that equation because the terms are different, even if N is large. There is therefore no basis for this conclusion from that equation. You cannot escape the conclusion that S <= 4 by saying that, as N becomes large, S will be <= 2√2.
If we cannot assume that the average of a large number of random variables from the same distribution will somehow converge to its expectation value, then ALL OF STATISTICS falls on its butt
Not true. There is no such thing as "its expectation value" when dealing with a few hidden variables with a large number of random values. Let's take a source that produces a wave with random values of the hidden variables (amplitude, phase, frequency). If this is how statistics is done, very soon people will start claiming that the "expectation value" of the amplitude as N becomes very large is zero. But if it were possible to measure the exact same wave N times, you would definitely get a different result. The latter IS the expectation value; the former is NOT.
 
  • #58
mn4j said:
If you read this article carefully, you will notice that assuming 4 different fair samples WITH DIFFERENT HIDDEN VARIABLES, you end up with a different inequality, which is never violated by any experiment or by quantum mechanics.

Look, the equation is the following:
1/N \sum_{i=1}^N \left( R^1_i + S^2_i + T^3_i + U^4_i \right)

And then the author concludes that for a single value of i, one has no specific limiting value of the expression R^1_i + S^2_i + T^3_i + U^4_i. But that's not the issue. The issue is that if we apply the sum, our expression becomes:
1/N \sum_{i=1}^N R^1_i + 1/N \sum_{i=1}^N S^2_i + 1/N \sum_{i=1}^N T^3_i + 1/N \sum_{i=1}^N U^4_i

And, assuming that each set of samples, 1, 2, 3, and 4, is fairly drawn from the overall distribution of hidden variables, we can conclude that:
each term, e.g. 1/N \sum_{i=1}^N T^3_i, will, for a large value of N, be close to the expectation value < T > over the probability distribution of hidden variables, independently of which fair sample (1, 2, 3, or 4) it has been calculated over.
As such, our sum is a good approximation (for large N) of:
< R > + < S > + < T > + < U >
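
As a rough numerical check of this decomposition (the deterministic hidden-variable model and the settings below are invented purely for illustration), each term can be averaged once on a single shared sample and once on its own disjoint sample drawn from the same distribution; the two CHSH values agree within statistical error.

```python
import math, random

# A deterministic local hidden-variable model, chosen only for illustration:
# lambda is uniform on [0, 2*pi); each side returns +/-1 from its own setting
# and lambda alone.
def A(setting, lam):
    return 1 if math.cos(lam - setting) >= 0 else -1

B = A  # same deterministic response on the other side

def E(sa, sb, lambdas):
    return sum(A(sa, l) * B(sb, l) for l in lambdas) / len(lambdas)

a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
N = 200_000
draw = lambda: [random.uniform(0, 2 * math.pi) for _ in range(N)]

# One shared sample used for all four terms...
shared = draw()
S_shared = E(a, b, shared) + E(a, bp, shared) + E(ap, b, shared) - E(ap, bp, shared)

# ...versus four disjoint samples, one per term, all drawn from the same
# stationary distribution (the fair-sampling assumption).
S_disjoint = E(a, b, draw()) + E(a, bp, draw()) + E(ap, b, draw()) - E(ap, bp, draw())

print("CHSH from one shared sample: ", round(S_shared, 3))
print("CHSH from four fair samples: ", round(S_disjoint, 3))  # agrees within statistical error
```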
 
  • #59
mn4j said:
A sample in which the parameter being estimated is assumed to be the same is in fact a fair sample. But this is not the kind of fair sample we are interested in here. Using the example of a source of waves, the hidden variables being (amplitude, phase, frequency), the kind of fair sample you are talking about is one in which all the waves produced have exactly the same VALUES for those variables.

No, I'm assuming that they are drawn from a certain unknown distribution, but that that distribution doesn't change when I change my measurement settings. In other words, that I get a statistically equivalent set for measurement setting 1 and for measurement setting 2. The reason for that is that I can arbitrarily pick my settings 1, 2 ... and that at the moment of DRAWING the element from the distribution, it has not been determined yet what setting I will use. As such, I assume that the distribution of hidden variables is statistically identical for the samples 1, 2, 3, and 4, and hence that the expectation values are those of identical distributions.

However, the sample we are interested in for the Bell's inequality, does not have to have the same values. The only important requirement is that those variables be present. You can therefore not draw inferences about this extended sample space by using your very restricted sample space.

If you assume them to be statistically identical, yes you can.

What you have done is to determine the 80% by testing the same individual 100 times and observing that the medicine is effective 80 times; then, after measuring 100 different people and finding 50%, you are making an inference by comparing the 80% (apples) with the 50% (oranges).

Well, even that would be statistically OK for the average. If I test the same individual and find an 80% chance, and then I test another individual and find a 70% chance, and so on... I will find a certain distribution for the "expectation values per person". If I now take a random drawing of 100 persons, and do the measurement only once each, I will get a distribution which, if my former individuals were fairly sampled, has the same average.

Try to repeat your proof of Bell's theorem considering that each sample measured has its own hidden variable VALUE. You cannot reasonably assume that all samples have exactly the same hidden variable values (which is your definition of fair sampling) because nobody has ever done any experiment in which they made sure the hidden variables had exactly the same values when measured.

You don't assume that they have the same values in the same order, but you do of course assume that they are drawn from the same distribution. Hence the averages should be the same. This is like having a population in which you pick 1000 people and you measure their weight and height. Next you pick (from the same population) 1000 other people and you measure their weight and height again. Guess what? You'll find the same correlations twice. Even though their "hidden" variables were "different". Now, imagine that you find a strong correlation between weight and height. Now, you pick again 1000 different people (from the same population), and you measure weight and footsize. Next, still 1000 different people, and you measure height and footsize. It's pretty obvious that if the correlation of weight with footsize is strong, you ought to also find a strong correlation between height and footsize.
What you are claiming now (what the paper is claiming) is that, because we've measured these correlations on DIFFERENT SETS of people, this shouldn't be the case, even though if we did this on a single set of 1000 people, we would find it.
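
A quick Python sketch of that population analogy (the toy model of height, weight, and foot size, and its coefficients, are made up for illustration): two disjoint samples of 1000 "people" drawn from the same population give the same correlations within statistical error, even though the individuals, and hence their "hidden" values, differ.

```python
import random, statistics

# Toy population model, invented for illustration: height drives both weight
# and foot size, plus independent noise.
def person():
    height = random.gauss(170, 10)                   # cm
    weight = 0.9 * height - 90 + random.gauss(0, 8)  # kg
    foot   = 0.15 * height + random.gauss(0, 1)      # cm-ish
    return height, weight, foot

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

def sample(n=1000):
    h, w, f = zip(*[person() for _ in range(n)])
    return h, w, f

# Two disjoint samples from the same population: the correlations agree.
for label in ("sample 1", "sample 2"):
    h, w, f = sample()
    print(label, "corr(weight, height):", round(pearson(w, h), 2),
          " corr(weight, foot):", round(pearson(w, f), 2))
```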
 
  • #60
vanesch said:
No, I'm assuming that they are drawn from a certain unknown distribution, but that that distribution doesn't change when I change my measurement settings. In other words, that I get a statistically equivalent set for measurement setting 1 and for measurement setting 2. The reason for that is that I can arbitrarily pick my settings 1, 2 ... and that at the moment of DRAWING the element from the distribution, it has not been determined yet what setting I will use. As such, I assume that the distribution of hidden variables is statistically identical for the samples 1, 2, 3, and 4, and hence that the expectation values are those of identical distributions.

Yes, you are assuming that each time the experiment is performed, the hidden variable values of the photons leaving the source are randomly selected from the same distribution of hidden variable values. How then can you know that you are in fact selecting the values in a random manner without actually knowing the behaviour of the hidden variable? You still do not understand the fact that nobody has ever done this experiment the way you are assuming it. Nobody has ever taken steps to ensure that the distribution of the samples is uniform as you claim; mere repetition multiple times is not enough, as such an experimental system will be easily fooled by a time-dependent hidden variable, or by a source in which the hidden variable value of the second photon pair emitted is related to the hidden variable value of the first photon pair emitted. Thus, the system you model imposes a drastically reduced hidden-variable space, and does not accurately model actual Aspect-type experiments.

If you assume them to be statistically identical, yes you can.
As I have pointed out already above, this assumption unnecessarily limits the hidden variable space, and has never been enforced in real Aspect type experiments. The critique stands!

Well, even that would be statistically OK for the average. If I test the same individual and find an 80% chance, and then I test another individual and find a 70% chance, and so on... I will find a certain distribution for the "expectation values per person". If I now take a random drawing of 100 persons, and do the measurement only once each, I will get a distribution which, if my former individuals were fairly sampled, has the same average.
But that's not what you are doing. What you are actually doing is deriving an inequality for measuring a single individual 100 times, and using that to compare with actually measuring 100 different individuals. For the analogy to work you must never actually measure a single individual more than once, since nobody has ever done that in any Aspect-type experiment.

You don't assume that they have the same values in the same order, but you do of course assume that they are drawn from the same distribution. Hence the averages should be the same. This is like having a population in which you pick 1000 people and you measure their weight and height. Next you pick (from the same population) 1000 other people and you measure their weight and height again. Guess what? You'll find the same correlations twice. Even though their "hidden" variables were "different". Now, imagine that you find a strong correlation between weight and height. Now, you pick again 1000 different people (from the same population), and you measure weight and footsize. Next, still 1000 different people, and you measure height and footsize. It's pretty obvious that if the correlation of weight with footsize is strong, you ought to also find a strong correlation between height and footsize.
If you take 1000 persons and measure their height and weight exactly once each, it will tell you absolutely nothing about what you will obtain if you measure a single person 1000 times. If you find a correlation between weight and footsize in the 1000 measurements of the same individual, the ONLY correct inference is that you have a systematic error in your equipment. However if you find a correlation between weight and footsize in the 1000 measurements from different individuals, there are two possible inferences neither of which you can reasonably eliminate without further experimentation:
1- systematic error in equipment
2- Real relationship between weight and footsize

It would be fallacious to interpret the correlation in the single person/multiple measurement result as meaning there is a real relationship between the weight and footsize.

What you are claiming now (what the paper is claiming) is that, because we've measured these correlations on DIFFERENT SETS of people, this shouldn't be the case, even though if we did this on a single set of 1000 people, we would find it.
No! What the paper is claiming, is the following, in the words of the author:
It was shown that Bell’s Theorem cannot be derived, either within a strongly objective interpretation of the CHSH function, because Quantum Mechanics gives no strongly objective results for the CHSH function, or within a weakly objective interpretation, because the only derivable local realistic inequality is never violated, either by Quantum Mechanics or by experiments.
...
Bell’s Theorem, therefore, is refuted.
 
  • #61
mn4j said:
Yes, you are assuming that each time the experiment is performed, the hidden variable values of the photons leaving the source are randomly selected from the same distribution of hidden variable values. How then can you know that you are in fact selecting the values in a random manner without actually knowing the behaviour of the hidden variable?

This is exactly what you assume when you do "random sampling" of a population. Again, if you think that there are pre-determined correlations between measurement apparatus, or timing, or whatever, then you are adopting some kind of superdeterminism, and you would be running into the kind of problems we've discussed before, even with medical tests.


You still do not understand the fact that nobody has ever done this experiment the way you are assuming it. Nobody has ever taken steps to ensure that the distribution of the samples is uniform as you claim; mere repetition multiple times is not enough, as such an experimental system will be easily fooled by a time-dependent hidden variable, or by a source in which the hidden variable value of the second photon pair emitted is related to the hidden variable value of the first photon pair emitted.

You can sample the photons randomly in time. You can even wait half an hour between each pair you want to observe, and throw away all the others. If you still assume that there is any correlation between the selected pairs, then this is equivalent to superdeterminism.
That is like saying that there is a dependency between picking the first and the second patient that will get the drug, and between the first and the second patient that will get the placebo.

As I have pointed out already above, this assumption unnecessarily limits the hidden variable space, and has never been enforced in real Aspect type experiments. The critique stands!

You might perhaps know that, especially in the first Aspect experiments, the difficulty was the inefficiency of the setup, which made the experiment have a very low count rate. As such, the involved pairs of photons were separated by very long time intervals compared to the lifetime of a photon in the apparatus (we are talking about factors of 10^12 here).
There is really no reason (apart from superdeterminism or conspiracies) to assume that the second pair had anything to do with the first.

If you take 1000 persons and measure their height and weight exactly once each, it will tell you absolutely nothing about what you will obtain if you measure a single person 1000 times. If you find a correlation between weight and footsize in the 1000 measurements of the same individual, the ONLY correct inference is that you have a systematic error in your equipment. However if you find a correlation between weight and footsize in the 1000 measurements from different individuals, there are two possible inferences neither of which you can reasonably eliminate without further experimentation:
1- systematic error in equipment
2- Real relationship between weight and footsize

Yes, but I was not talking about 1 person measured 1000 times versus 1000 persons measured 1 time each; I was talking about measuring 1000 persons 1 time each, and then measuring 1000 OTHER persons 1 time each again.

You do realize that the 4 samples in an Aspect-type experiment are taken "through one another", don't you?
You do a setting A, and you measure an element of sample 1
you do setting B and you measure an element of sample 2
you do a setting A again, and you measure the second element of sample 1
you do a setting D and you measure an element of sample 4
you do a setting C and you measure an element of sample 3
you do a setting A and you measure the third element of sample 1
you ...

by quickly changing the settings of the polarizers for each measurement.
And now you tell me that the first, third and sixth measurement are all "on the same element" ?
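
A tiny Python sketch of that interleaving (purely illustrative bookkeeping; the outcomes are just random ±1 placeholders): the setting is switched at random pair by pair, and each event is filed into the sub-sample for whichever setting happened to be active, so the four sub-samples are built up "through one another" in time.

```python
import random

# Illustrative bookkeeping of an Aspect-type run: for each emitted pair a
# setting combination (A, B, C or D) is chosen at random just before
# detection, and the outcome is filed into that setting's sub-sample.
def outcome():
    # stand-in for whatever the apparatus records for one pair (+1/-1 per side)
    return random.choice([-1, 1]), random.choice([-1, 1])

sub_samples = {"A": [], "B": [], "C": [], "D": []}
for _ in range(20):
    setting = random.choice("ABCD")   # switched quickly, pair by pair
    sub_samples[setting].append(outcome())

# Consecutive elements of "sample A" may be separated in time by many pairs
# recorded under the other settings.
for setting, events in sub_samples.items():
    print(setting, len(events), "events")
```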
 
  • #62
Vanesh,

I think that you require too much from a scientific theory. You require it to be true in some absolute sense.

In the case of the medical test in a superdeterministic universe, the theory that the medicine cured the patient is perfectly good from a practical standpoint, as it will always predict the correct result. The fact that, unknown to us, there is a different cause in the past does not render the theory useless. It is wrong, certainly, but probably not worse than all our present scientific theories.

Every physical theory to date, including QM and GR is wrong in an absolute sense but we still are able to make use of them.
 
  • #63
ueit said:
In the case of the medical test in a superdeterministic universe, the theory that the medicine cured the patient is perfectly good from a practical standpoint, as it will always predict the correct result. The fact that, unknown to us, there is a different cause in the past does not render the theory useless. It is wrong, certainly, but probably not worse than all our present scientific theories.

This is entirely correct, and is an attitude that goes with the "shut up and calculate" approach. Contrary to what you think - and if you read my posts then you should know this - I don't claim at all that our current theories are in any way "absolutely true". I only say that *if* one wants to make an ontology hypothesis (that means, IF one wants to pretend that they are true in some sense) then such and so, knowing that this is only some kind of game to play. But it is *useful* to play that game, for exactly the practical reason you give above.

Even if it is absolutely not true that taking a drug cures you, and taking a drug only comes down to doing something that was planned long ago, with that same cause also making you get better, our pharmacists have a (in that case totally wrong) way of thinking about how the drug acts in the body and cures you, and they had better stick to their wrong picture, which helps them make "good drugs" (in the practical sense), than be convinced that they don't understand anything about how drugs work in the human body, which would then render it impossible for them to design new drugs, given that their design procedures are based upon a totally wrong picture of reality.
So if nature "conspires" to make us think that drugs cure people (even if it is just a superdeterministic correlation), then it is, practically speaking, a good idea to develop an ontological hypothesis in which people get cured by drugs.

It is in this light that I see MWI too: even if it is absolutely not true in an ontological sense, if nature conspires to make us think that the superposition principle is correct, then it is a good idea to develop an ontological hypothesis in which this superposition principle is included. Whether this is "really true" or not, you will get a better intuition for quantum theory, in the same way the pharmacist will get a better feeling for the design of drugs based upon his wrong hypothesis that it is the drugs that cure people.
 
  • #64
vanesch said:
You can sample the photons randomly in time.
This CAN NOT be done, unless you know the time-behavior of the variables. You seem to be assuming that each variable has a single value with a simple normal distribution. What if the value of a variable changes like a cos(kw + at) function over time? If you don't know this beforehand, there is no way you can determine the exact behavior of the function by random sampling. If you take "random" samples of this function, you end up with a rather flat distribution, which does not tell you anything about the behavior of the variable.

vanesch said:
There is really no reason (apart from superdeterminism or conspiracies) to assume that the second pair had anything to do with the first.
On the contrary, the mere fact that they come from the same source gives me more than ample reason, no conspiracy. We are trying to find hidden variables here are we not? Therefore to make an arbitrary assumption without foundation that the emission of the first pair of photons does not change the source characteristics in a way that can affect the second pair is very unreasonable. No matter how long the time is between the emissions. Do you have any scientific reason to believe that hidden variables MUST not have that behavior?

vanesch said:
Yes, but I was not talking about 1 person measured 1000 times versus 1000 persons measured 1 time each; I was talking about measuring 1000 persons 1 time each, and then measuring 1000 OTHER persons 1 time each again.
Yes, and it does not change the fact that your results will tell you absolutely nothing about what you would obtain by measuring a single person 1000 times.
vanesch said:
You do realize that the 4 samples in an Aspect-type experiment are taken "through one another", don't you?
You do a setting A, and you measure an element of sample 1
you do setting B and you measure an element of sample 2
you do a setting A again, and you measure the second element of sample 1
you do a setting D and you measure an element of sample 4
you do a setting C and you measure an element of sample 3
you do a setting A and you measure the third element of sample 1
you ...

by quickly changing the settings of the polarizers for each measurement.
And now you tell me that the first, third and sixth measurement are all "on the same element" ?
No. I'm telling you that the results of this experiment can not and should not be compared with calculations based on measuring a single element multiple times. Your experiment will tell you about ensemble averages, but it will never tell you about the behavior of a single element.
 
  • #65
It may be more helpful to consider thought experiments for which (unitary, no fundamental collapse) quantum mechanics makes different predictions. I think that David Deutsch has given one such example involving an artificially intelligent observer implemented by a quantum computer. I don't remember the details of this thought experiment, though...
 
  • #66
mn4j said:
This CAN NOT be done, unless you know the time-behavior of the variables. You seem to be assuming that each variable has a single value with a simple normal distribution.

I'm assuming that whatever is the time dependence of the variables, that this should not be correlated with the times of the measurement, and there is an easy way to establish that: change the sampling rates, sample at randomly generated times... If the expectation values are always the same, we can reasonably assume that there is no time correlation. Also if I have long times between the different elements of a sample, I can assume that there is no time coherence left.
I make no other assumption about the distribution of the hidden variables than its stationarity.

What if the value of a variable changes like a cos(kw + at) function over time? If you don't know this beforehand, there is no way you can determine the exact behavior of the function by random sampling.

No, but I can determine the statistical distribution of the samples taken at random times of this function. I can hence assume that if I take random time samples, I draw them from this distribution.


If you take "random" samples of this function, you end up with a rather flat distribution, which does not tell you anything about the behavior of the variable.

First of all, it won't be flat, it will be peaked at the sides. But no matter; that is sufficient. If I assume that the variable is distributed in this way, that's good enough, because that is what this variable IS when the sample times are incoherently related to the time function.
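
A short Python sketch of that point (the constants k, w, a are arbitrary illustration values): sampling cos(kw + at) at randomly chosen times yields a stationary distribution peaked near ±1 rather than flat, and it is this distribution of sampled values that enters the expectation values.

```python
import math, random

# A hidden variable that oscillates deterministically in time; k, w, a are
# arbitrary constants chosen only for illustration.
def hidden(t, k=1.3, w=0.7, a=2.1):
    return math.cos(k * w + a * t)

# Sample it at randomly chosen, mutually incoherent times.
samples = [hidden(random.uniform(0, 10_000)) for _ in range(100_000)]

# Crude text histogram: the values pile up near -1 and +1 (arcsine-like),
# not flat, and this stationary distribution is all that matters statistically.
bins = [0] * 10
for v in samples:
    bins[min(int((v + 1) / 0.2), 9)] += 1
for i, count in enumerate(bins):
    lo = -1 + 0.2 * i
    print(f"[{lo:+.1f}, {lo + 0.2:+.1f}): {'#' * (count // 1500)}")
```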

On the contrary, the mere fact that they come from the same source gives me more than ample reason, no conspiracy. We are trying to find hidden variables here are we not? Therefore to make an arbitrary assumption without foundation that the emission of the first pair of photons does not change the source characteristics in a way that can affect the second pair is very unreasonable.

That is not unreasonable at all, because the "second pair" will be, in fact, the trillionth pair or something. In order for your assumption to hold, the first pair should influence EXACTLY THOSE pairs that we are going to decide to measure, maybe half an hour later, when we arbitrarily decide to change the settings of the polarizers to exactly the same settings.

It is then very strange that we never see any variation in the expectation values of any of the samples, no matter whether we sample 1 microsecond later or half an hour later... but that this hidden influence is EXACTLY what is needed to produce Bell-type correlations. This is nothing else but an assumption of superdeterminism or of conspiracy.

No matter how long the time is between the emissions. Do you have any scientific reason to believe that hidden variables MUST not have that behavior?

Well, as I said, that kind of behaviour is superdeterminism or conspiracy, which is by hypothesis not assumed in Bell's theorem, as he starts out from hidden variables that come from the same distribution for each individual trial. The reason for that is the assumption (spelled out in the premises of Bell's theorem) that the "free choice" is really a free, and hence statistically independent, choice of the settings, and the assumption that the measurement apparatus deterministically and in a stationary way gives the outcome as a function of the received hidden variable.

No. I'm telling you that the results of this experiment can not and should not be compared with calculations based on measuring a single element multiple times. Your experiment will tell you about ensemble averages, but it will never tell you about the behavior of a single element.

Sure. But the theorem is about ensemble averages of a stationary distribution. That's exactly what Bell's theorem tells us: that we cannot reproduce the correlations as ensemble averages of a single stationary distribution which deterministically produces all possible outcomes.

Assuming that the distributions are stationary, we are allowed to measure these correlations on different samples (drawn from the same distribution).

As such, the conclusion is that they cannot come from a stationary distribution. That's what Bell's theorem tells us. Not more, not less.

So telling me that the distributions are NOT stationary, but are CORRELATED with the settings of the measurement apparatus (or something equivalent, such as the sample times...), and that the measurements are not deterministic as a function of the elements of the distribution, is nothing else but denying one of the premises of Bell's theorem. One shouldn't then be surprised to find other outcomes.

Only, if you assume that the choices are FREE and UNCORRELATED, you cannot make the above hypothesis.

It is well-known that making the above hypotheses (and hence making the assumption that the choices of the measurement apparatus settings and the actions of the measurement apparatus are somehow correlated) allows one to get EPR-type results. But it amounts to superdeterminism or conspiracy.

So you can now say that the Aspect results demonstrate superdeterminism or conspiracy. Fine. So?
 
