Sunil
Nullstein said: "The interaction between Alice's polarizer and Alice's particle is plausible and non-magical, because they are in direct contact with each other. However, Alice's polarizer is not in direct contact with Bob's particle and neither in direct contact with the TV in the living room. So why would it be plausible that her polarizer can modify Bob's particle but not turn on the TV in the living room?"

Alice's polarizer influences the configuration of Alice's particle via direct contact. Alice's particle then interacts with Bob's particle through the entanglement created by the preparation. For the details, look at the Bohmian velocity.
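For reference, here is the guidance equation of standard Bohmian mechanics that this appeal to "the Bohmian velocity" rests on (a textbook formula, not anything specific to this thread):

```latex
% Guidance equation: each particle's velocity is fixed by the universal
% wavefunction evaluated at the actual configuration Q(t).
\[
  \frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
  \operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)
  \Bigg|_{q = Q(t)}
\]
% For an entangled pair with wavefunction \psi(q_A, q_B, t) that does not
% factorize, Bob's velocity dQ_B/dt depends on Alice's actual position
% Q_A(t). That is the channel through which the polarizer's local effect
% on Alice's particle reaches Bob's particle -- and why it does not reach
% an unentangled system like the TV.
```

The asymmetry the quote asks about is thus built into the formula: Bob's particle shares a wavefunction with Alice's particle, the TV does not.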
Nullstein said: "Entanglement is just a statistical feature of the whole population of identically prepared systems and not a property of the individual particles."

The "just" in "just a statistical feature" is your interpretation. The preparation procedure was applied to every individual particle, so it can lead (and does lead) to shared behavior.
Nullstein said: "For example, if the measurement axes are not aligned, the correlation may be only 10%, so it can't be an individual property of the particles and just shows up in the statistics of the whole ensemble."

This argument makes no sense to me.
Nullstein said: "No I didn't forget that. I've shown you a recipe to construct a common cause event in the past given two events in the present under the assumption that they are linked by a non-local cause-and-effect relationship and the assumption that they had the chance to interact at some time in the past (as would be the case in a Big Bang scenario). This common cause satisfies the required conditional probability relations."

Because you say so?
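For context, the "required conditional probability relations" presumably refers to Reichenbach's screening-off conditions, which a common cause $C$ of correlated events $A$ and $B$ is standardly required to satisfy:

```latex
\[
  P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \qquad
  P(A \wedge B \mid \neg C) = P(A \mid \neg C)\,P(B \mid \neg C),
\]
\[
  P(A \mid C) > P(A \mid \neg C), \qquad
  P(B \mid C) > P(B \mid \neg C).
\]
```

Together these conditions imply $P(A \wedge B) > P(A)\,P(B)$, i.e. conditioning on the common cause screens off the correlation.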
Nullstein said: "Under the given assumptions, the existence of a non-local explanation implies the existence of a superdeterministic explanation."

Given that a "superdeterministic explanation" can be given for everything (which shows that it has nothing in common with an explanation), this is a triviality not worth mentioning.
Nullstein said: "A superdeterministic theory can make statistical predictions which can be falsified, just like any other theory. The situation is not worse than in any other hidden variable theory. We cannot falsify the claim that there are hidden variables, but we can draw conclusions from the theory and falsify it, if the predictions don't match the experiment."

No, no superdeterministic theory can make falsifiable predictions.
Of course, this depends on what one calls a superdeterministic theory. A theory which claims that some experimenters have cheated and, instead of using a really random preparation, used knowledge about some initial data, also has a correlation between the experimenters' decisions and the initial data. If you call this a "superdeterministic theory", then, indeed, a "superdeterministic theory" can be a normal falsifiable theory. But in this case there is a simple, straightforward prescription for how to falsify it: use a different method of making the experimenters' decisions. Say, add a pseudorandom number generator and use its output to modify the decision. If the effect remains unchanged, then this particular conspiracy theory is falsified. And given this possibility, theories of this type are obviously worthless for the discussion of Bell's inequality, because the effect is already known to appear with very many different methods of making the experimenters' choices.
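A minimal sketch of this falsification test (illustrative Python, not from the thread; the one-bit "cheating experimenter" model and the agreement-based correlation measure are my own simplifying assumptions):

```python
import random

def correlation(xs, ys):
    """Fraction of agreements minus fraction of disagreements, in [-1, 1]."""
    agree = sum(x == y for x, y in zip(xs, ys))
    return 2 * agree / len(xs) - 1

random.seed(0)
n = 100_000
# Hidden initial data lambda for each run (one bit, for simplicity).
lam = [random.randint(0, 1) for _ in range(n)]

# "Cheating experimenter" model: the setting choice simply copies lambda,
# so choice and initial data are perfectly correlated.
cheating_choice = lam[:]
print(correlation(cheating_choice, lam))   # exactly 1.0

# Falsification step: XOR each choice with an independent pseudorandom bit.
# Any correlation that depends on the choice tracking lambda is destroyed.
prng_bits = [random.randint(0, 1) for _ in range(n)]
modified_choice = [c ^ b for c, b in zip(cheating_choice, prng_bits)]
print(correlation(modified_choice, lam))   # near 0.0
```

If the Bell-violating correlations nevertheless survive this modification in the lab, the specific conspiracy (choices leaking the initial data) is ruled out.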
For me, a superdeterministic theory requires more, namely that it goes beyond the simple "cheating experimenters" theory. That means it has to explain a correlation between the initial data and the experimenters' decisions for every prescription of how the experimenters' decision is made. Which, for a start, includes a combination of the outputs of pseudorandom number generators, light sources from the opposite side of the universe, and some simple Geiger counter. Note also that the correlation would have to be present always, and be large enough to allow the observed quantum violations of the BI.