Is this popular description of entanglement correct?

  • Thread starter: entropy1
  • Tags: entanglement
  • #91
Nullstein said:
The interaction between Alice's polarizer and Alice's particle is plausible and non-magical, because they are in direct contact with each other. However, Alice's polarizer is not in direct contact with Bob's particle and neither in direct contact with the TV in the livingroom. So why would it be plausible that her polarizer can modify Bob's particle but not turn on the TV in the livingroom?
Alice's polarizer influences the configuration of Alice's particle via direct contact. Alice's particle interacts with Bob's particle through the entanglement created by the preparation. For the details, look at the Bohmian velocity.

Nullstein said:
Entanglement is just a statistical feature of the whole population of identically prepared systems and not a property of the individual particles.
The "just" in "just a statistical feature" is your interpretation. The preparation procedure was applied to all individual particles. So, it can lead (and does lead) to shared behavior.

Nullstein said:
For example, if the measurement axes are not aligned, the correlation may be only 10%, so it can't be an individual property of the particles and just shows up in the statistics of the whole ensemble.
This argument makes no sense to me.
Nullstein said:
No I didn't forget that. I've shown you a recipe to construct a common cause event in the past given two events in the present under the assumption that they are linked by a non-local cause-and-effect relationship and the assumption that they had the chance to interact at some time in the past (as would be the case in a Big Bang scenario). This common cause satisfies the required conditional probability relations.
Because you say so?

Nullstein said:
Under the given assumptions, the existence of a non-local explanation implies the existence of a superdeterministic explanation.
Given that a "superdeterministic explanation" can be given for everything (which shows that it has nothing in common with an explanation), this is a triviality not worth mentioning.
Nullstein said:
A superdeterministic theory can make statistical predictions which can be falsified, just like any other theory. The situation is not worse than in any other hidden variable theory. We cannot falsify the claim that there are hidden variables, but we can draw conclusions from the theory and falsify it, if the predictions don't match the experiment.
No, no superdeterministic theory can make falsifiable predictions.

Of course, this depends on what one calls a superdeterministic theory. A theory which claims that some experimenters have cheated and, instead of using really random preparation, used knowledge about some initial data also has a correlation between the experimenters' decisions and the initial data. If you call this a "superdeterministic theory", then, indeed, a "superdeterministic theory" can be a normal falsifiable theory. But in this case, there is a simple, straightforward prescription for how to falsify it: use a different method of making the experimenters' decisions. Say, add a pseudorandom number generator and use its output to modify the decision. If the effect remains unchanged, then this particular conspiracy theory is falsified. And given this possibility, theories of this type are obviously worthless for the discussion of Bell's inequality, because the effect is already known to appear with very many different methods of making the experimenters' choices.

For me, a superdeterministic theory requires more, namely that it goes beyond the simple "cheating experimenters" theory. That means it has to explain a correlation between initial data and experimenters' decisions for every prescription of how the experimenters' decision has been made. Which, for a start, includes a combination of the output of pseudorandom number generators, light sources from the opposite side of the universe, and some simple Geiger counter. Note also that the correlation should always be present, and be large enough to allow the observed quantum violations of the BI.
 
  • #92
Non-locality is a minor problem
bhobba said:
As Peter keeps correctly pointing out, statements like the above depend on what you mean by locality.
No problem; in this case I am talking about Bell locality.
bhobba said:
Also, it expresses a personal reaction that is different to scientific fact.
Of course, if I look at the facts I have presented and draw a conclusion, this conclusion is a personal reaction. That does not make it non-objective or anything of the sort. If I compute 2+2=4, this is also only my personal computation, and I can make errors in my computations. So what?

That science, and in particular the theory of gravity together with its applications in astronomy, developed successfully in the time of non-local Newtonian gravity is a fact. So we have empirical evidence from the history of science that science works nicely when based on non-local theories.

The point that non-local theories appear in a natural way as limits of local theories when the limiting velocity of causal influences goes to infinity is also a quite simple fact. That means, all one needs for a non-local theory to be viable is that this limiting velocity is too large to be measured.

Neither point can be made for superdeterminism. The fact that it claims that something assumed to be zero by a normal causal theory can be nonzero prevents superdeterminism from being a limit of normal causal theories. And my claim remains that superdeterminism, if taken seriously (instead of being ignored everywhere except in the particular case of the BI violations, to get rid of the non-locality which would destroy relativistic metaphysics), would be the end of science.
bhobba said:
Bell is airtight, and its experimental confirmation is one of the outstanding achievements of 20th Century science, but how you view it does lead to subtleties.
I try to view it without following relativistic prejudices, that's all. But relativistic prejudices are quite strong feelings. This is to be expected, given that those who study physics have plausibly been impressed by Einstein's argumentation for relativity already during their childhood. If they had not been impressed by this argumentation and liked it, they would hardly have decided to study physics. Such childhood impressions, if not questioned and rejected during one's youth, tend to become dogma and create strong emotional reactions against the persons who question them. The readiness to question everything does not extend to relativistic metaphysics. Not strange, if one takes into account that the readiness to question everything of common sense was learned together with relativistic metaphysics.

The relativistic argument itself is a valid one: there is a symmetry in the experiences, and the simplest way to explain it is to assume that it is a fundamental symmetry, a symmetry of reality. But this argument is not that strong. It is sufficient to present a derivation of the relativistic symmetry of the observables from a non-symmetric reality to counter it.
 
  • #93
Sunil said:
Those who are so simple and regular that a causal explanation can be found.
I think we have a different understanding about what that function is, so I'll give a more detailed account of its calculation.

We start with a system of N charges corresponding to a physically possible microscopic state of the whole experiment. We input this state in a simulation on some supercomputer. The simulation will output 8 symbols:

Ax, Ay, Az - the spin components (hidden variables) of particle A
Bx, By, Bz - the spin components (hidden variables) of particle B
D1 (orientation of detector 1 at the time of detection)
D2 (orientation of detector 2 at the time of detection)

So, an output like +++, --+, X, Y would correspond to an experimental run where the spins of particle A are UP, UP, UP in the X, Y and Z directions respectively, the spins of particle B are DOWN, DOWN, UP in the X, Y and Z directions respectively, and the particles are measured on X (at detector D1) and on Y (at detector D2).

We repeat the simulation for a large number of initial states so that we can get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved. In the end we just count how many times a certain value of the hidden variables corresponds to each detector setting. Our function will simply give you the probability of getting a certain hidden variable for a certain type of measurement.

As you can see, this function is fixed. It's just the statistical prediction of the theory (say classical EM) for this experiment. The function itself is trivially simple, but it is based on a huge amount of calculations used to solve those N-body EM problems. You CAN choose whatever initial state you want (random number generators and such) but you cannot mess with the function.
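As a concrete (and deliberately toy) sketch of this counting step, here is what it could look like, with a hypothetical simulate() standing in for the supercomputer N-body computation; everything inside simulate() is invented for illustration:

```python
import random
from collections import Counter

def simulate(initial_state):
    """Hypothetical stand-in for the N-body EM simulation described above.
    It must return (A, B, D1, D2): the hidden spin components of both
    particles and the two detector orientations, all fixed by the
    deterministic evolution of the given initial state."""
    rng = random.Random(initial_state)
    A = tuple(rng.choice("+-") for _ in range(3))    # Ax, Ay, Az
    B = tuple("-" if a == "+" else "+" for a in A)   # anti-correlated toy pair
    D1, D2 = rng.choice("XYZ"), rng.choice("XYZ")
    return A, B, D1, D2

counts = Counter()     # occurrences of (hidden variables, settings)
settings = Counter()   # occurrences of each setting pair
for run in range(100_000):
    initial_state = random.getrandbits(64)           # chosen randomly, no fine-tuning
    A, B, D1, D2 = simulate(initial_state)
    counts[(A, B, D1, D2)] += 1
    settings[(D1, D2)] += 1

def p(A, B, D1, D2):
    """Estimated probability of the hidden-variable combination (A, B)
    given that the detectors ended up at orientations (D1, D2)."""
    return counts[(A, B, D1, D2)] / settings[(D1, D2)]
```

The function p() plays the role of the fixed statistical prediction described above; only simulate() would have to be replaced by the real N-body computation.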

OK, so now we can understand what Bell's independence assumption amounts to. It posits that all probabilities must be the same. My question to you is simple. Why? I think it is entirely possible that those probabilities would not be the same. Some combinations could have 0 probability because, say, no initial state can evolve into them.

As far as I can tell, assuming the probabilities must be the same is completely unjustified, or, using your words, is pure nonsense.

It's also nonsense to claim that if the probabilities come out differently we should abandon science. As you can see, superdeterminism is perfectly testable, you can get predictions from it and compare to experiment.
Sunil said:
Science depends on the possibility to make independent choices for experiments.
Explain to me why the above test, based on computer simulations where choices are not independent, is not science.

Sunil said:
The classical Maxwell equations have a limiting speed, namely c. This is sufficient to prove the Bell inequalities for space-like separated measurements.
Again, classical EM involves long-range interactions. Only after the function is calculated can you see whether the independence assumption is satisfied or not.

Sunil said:
You can use some value to define the actual orientation of the device so that this turning cannot causally influence the initial state.
You cannot. Determinism implies that for a given initial state the evolution is unique.

Sunil said:
The conspiracy is that this "related" translates into a correlation. This happens to be only in exceptionally simple circumstances - those circumstances where even we human beings are usually able to identify causal explanations.
An N-body system where N is 10^26 or so is not simple. You need to solve the equations to get the predictions. Since the initial states are picked at random, there is no conspiracy involved.

Sunil said:
Not sure but plausible. But there are also all the gas particles in the air, and the atoms of the body of the guy who throws the coin.
We can rigorously prove that macroscopic neutral objects do not interact at a distance by calculating the van der Waals forces between them. They are almost zero when collisions are not involved. So, we can justify the independence assumption theoretically.

In the case of direct collisions (contact between the body and the coin) there is no independence, sure, and nobody would claim there is.

In the case of fluid mechanics we can simply rely on experiment. If the Navier–Stokes equations give good predictions we can deduce that the microscopic state of the fluid can be disregarded for that specific experiment. No need to make unjustified assumptions.
Sunil said:
One can, of course, reduce the possible outcomes to a discrete number of large subsets, and then it is possible that there will be only a single one, so no statistics involved. But these are exceptions, not the rule. Measurement errors are usually sufficient to force you to use statistics.
This is different. Sure, there are measurement errors and you can try to reduce them by repeating the experiment and use the average value. But this has nothing to do with the measurement settings being free parameters.

I want to measure the temperature of something. I put a thermometer there. Nobody cares if my decision to make that measurement at that time and in that specific way was free or not. The only important thing is that the report contains all those decisions so that others can reproduce the experiment and rely on its result.
 
  • #94
PeterDonis said:
I said no such thing. All I said was that QFT does not say anything about "causal relationships" at all.
Yes, but there is a principle of logic (the law of excluded middle) that says that the following statements:

P1. A caused B
P2. A did not cause B

cannot both be true.

What you are doing is oscillating between them, which is logically fallacious. If I say that EPR proves that you either need A to cause B or hidden variables, you say that there is no experimental distinction between A causing B and A not causing B. But this is irrelevant. One of them has to be false. Which one?
 
  • #95
AndreiB said:
there is a principle of logic (the law of excluded middle)
Which only applies if the binary concept being used is in fact well-defined for the domain being discussed. If "cause" is not well-defined (and so far, despite my repeated requests, you have not given a testable definition of "cause"), then sentences involving that concept are meaningless, so you can't apply logic using them.
 
  • #96
PeterDonis said:
Which only applies if the binary concept being used is in fact well-defined for the domain being discussed. If "cause" is not well-defined (and so far, despite my repeated requests, you have not given a testable definition of "cause"), then sentences involving that concept are meaningless, so you can't apply logic using them.
Logic does not require all concepts to be experimentally testable. And there is nothing meaningless about the concept of cause. In the case of EPR, "A caused B" means that the spin measured at A (say UP) changed B from whatever state it was before (which includes no state at all) to a spin state of DOWN. So, you can replace the word "caused" with the word "changed". So, you need to choose between these options:

P1. The measurement result changed B from whatever it was before to a DOWN state.
P2. The measurement result did not change B from whatever it was before to a DOWN state.
 
  • #97
AndreiB said:
Logic does not require all concepts to be experimentally testable.
If you can't give a testable definition of "cause", then it's not well defined if you're trying to make general claims based on "cause".

You can, of course, define what "cause" means in a particular model you happen to prefer even if that definition doesn't make "cause" testable in your model. But that doesn't require me to accept your definition or your model. And you certainly can't use such a model to make general assertions.

AndreiB said:
you can replace the word "caused" with the word "changed".
Doesn't change anything about what I've said.
 
  • #98
DrChinese said:
1. In this paper*, there is no model presented that explains Bell correlations.
No, but he derives QM, so, implicitly, his model makes the same predictions as QM.

DrChinese said:
As always, I challenge anyone (and especially 't Hooft) to take the DrChinese challenge for their local realistic (or superdeterministic as the case may be) model. If there are values for various choices of measurement angles (which I choose, or think I choose), what are they for angle settings 0/120/240 degrees (entangled Type I PDC photon pairs)? The challenger provides the results, I pick the angle settings.
This is the point of superdeterminism. You CANNOT pick the settings. They are determined by the initial state of the system. What you can pick is the initial state of the whole experiment; after that you cannot touch the experiment.
DrChinese said:
According to the 't Hooft hypothesis, I will always pick pairs that yield the correct quantum expectation value.
Yes, because you are part of the universe (the CA) and you have to obey the rules of the CA. The decisions you make are already "there" in the past state of the CA.

DrChinese said:
How is it that, I sitting here at a remote keyboard, am forced to select angle pairs that come out to a 25% "apparent" match rate when the "true" match rate - according to local realism - is over 33%?
The "true match rate" you are speaking about is only expected when no interactions exist (like in the case of distant Newtonian billiard balls). If interactions exist, the "true match rate" has to be determined based on the initial state of the whole experiment, as explained in my post #93. So, you need to calculate the rate based on the CA rules, not based on Newtonian rigid-body mechanics. If 't Hooft's math is correct, the "true match rate" of his CA is the same as QM's one.

DrChinese said:
2. Clearly defined? Exactly how are Alice and Bob's apparently independent choice of settings tied to the measurement outcomes?
These are Alice's possible states: +-+-+-+-+- and -+-+-+-+-+ (you can think of + and - as representing the charge distribution of Alice, or CA states). Like charges do not like being close to each other, which is why we only have those two states.

These are the possible states of the hidden variable: +-, -+

In the no interaction case these experimental states are possible:

1. Alice: +-+-+-+-+- HV: +-
2. Alice: +-+-+-+-+- HV: -+
3. Alice: -+-+-+-+-+ HV: +-
4. Alice: -+-+-+-+-+ HV: -+

If there is interaction, states 2 and 3 are impossible (like charges are facing each other), so the statistics are different.
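A minimal sketch of this counting, with the exclusion rule encoded in a simplified (and admittedly made-up) way as "the last charge of Alice's configuration must not face the same charge at the start of the hidden-variable configuration":

```python
from itertools import product

alice_states = ["+-+-+-+-+-", "-+-+-+-+-+"]
hv_states = ["+-", "-+"]

def compatible(alice, hv):
    # Simplified encoding of the constraint above: the combination is excluded
    # when the charge at the end of Alice's configuration faces the same charge
    # at the start of the hidden-variable configuration.
    return alice[-1] != hv[0]

no_interaction = list(product(alice_states, hv_states))           # all 4 combinations
with_interaction = [(a, h) for a, h in no_interaction if compatible(a, h)]

for label, combos in [("no interaction", no_interaction),
                      ("with interaction", with_interaction)]:
    print(label)
    for a, h in combos:
        # With equal weighting, each allowed combination is equally likely, so
        # removing combinations 2 and 3 changes the joint statistics.
        print(f"  Alice: {a}  HV: {h}  P = {1/len(combos):.2f}")
```

In the "no interaction" case all four combinations appear with probability 0.25; with the constraint, only combinations 1 and 4 survive, each with probability 0.5.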
 
  • #99
PeterDonis said:
If you can't give a testable definition of "cause", then it's not well defined if you're trying to make general claims based on "cause".
I defined "cause". It means "change" in this context.
PeterDonis said:
You can, of course, define what "cause" means in a particular model you happen to prefer even if that definition doesn't make "cause" testable in your model. But that doesn't require me to accept your definition or your model. And you certainly can't use such a model to make general assertions.
I can certainly define what a state change means in QM. A Z-spin DOWN state will give a DOWN result on Z with 100% certainty. Any other state will not give a DOWN result on Z with 100% certainty. If at T1 you have a DOWN state and at T2 you have a different state, that qualifies as a change.

We know that, after the A measurement (UP), B is in a DOWN state. Was B in the same DOWN state before the measurement of A?
 
  • #100
AndreiB said:
This is the point of superdeterminism.
The point of "Superdeterminism" is simple: The initial conditions of the Universe were arranged that way that all measurements performed were and are consistent with the predictions of quantum mechanics.
 
  • #101
AndreiB said:
It's also nonsense to claim that if the probabilities come out differently we should abandon science. As you can see, superdeterminism is perfectly testable, you can get predictions from it and compare to experiment.
The placebo effect is perfectly testable too. Nevertheless, it makes your experimental results pretty useless if you don't implement measures to get it under control and reduce it. We don't have to give up science, but we have to give up the hope of learning much from data affected by it ("unjustified independence assumptions") in an uncontrolled way.
 
  • #102
gentzen said:
The placebo effect is perfectly testable too. Nevertheless, it makes your experimental results pretty useless if you don't implement measures to get it under control and reduce it. We don't have to give up science, but we have to give up the hope of learning much from data affected by it ("unjustified independence assumptions") in an uncontrolled way.
In medicine we have the problem that a lot of phenomena are not understood. The placebo effect simply means that the psychological state of the patient matters. Even if you cannot eliminate this aspect completely you could investigate the reason behind the effect and take that reason (say the presence of some chemicals in the brain) into account. Of course, this makes research harder, but it is also of a greater quality since you get a deeper understanding of the drug's action. In any case, it's not useless.

In other branches of science we understand pretty well what interacts with what, so we can design the experiment accordingly. Astronomers don't just assume that stars move independently of each other because the calculation would be easier. They first look at the interactions between them and then try to model their behavior as it is, not as they would like it to be. For some reason, the EM interaction is treated differently (we know it exists but we ignore it), and this leads to wrong expectations regarding Bell tests.
 
  • #103
AndreiB said:
We start with a system of N charges corresponding to a physically possible microscopic state of the whole experiment. We input this state in a simulation on some supercomputer. The simulation will output 8 symbols:

Ax, Ay, Az - the spin components (hidden variables) of particle A
Bx, By, Bz - the spin components (hidden variables) of particle B
D1 (orientation of detector 1 at the time of detection)
D2 (orientation of detector 2 at the time of detection)

We repeat the simulation for a large number of initial states so that we can get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved. In the end we just count how many times a certain value of the hidden variables correspond to each detector settings. Our function will simply give you the probability of getting a certain hidden variable for a certain type of measurement.

As you can see, this function is fixed. It's just the statistical prediction of the theory (say classical EM) for this experiment. The function itself is trivially simple, but it is based on a huge amount of calculations used to solve those N-body EM problems. You CAN choose whatever initial state you want (random number generators and such) but you cannot mess with the function.

OK, so now we can understand what Bell's independence assumption amounts to. It posits that all probabilities must be the same.
Not clear what this means. The independence assumption is ##P(A_i=+|D_2=I) = P(A_i=+)##, and similarly for ##B_i## and ##D_1##.
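For reference, in the notation usually used in derivations of Bell's theorem, with ##\lambda## the hidden variables and ##a, b## the setting choices, this measurement-independence assumption is written as
$$\rho(\lambda \mid a, b) = \rho(\lambda),$$
i.e. the distribution of the hidden variables does not depend on which settings the experimenters end up choosing.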
AndreiB said:
My question to you is simple. Why?
I use for the preparation of that experiment the claim you made later:
AndreiB said:
So, we can justify the independence assumption theoretically.
I store the seeds for some pseudorandom number generators in devices near A resp. B sufficiently isolated so that you can justify (with whatever means, not my problem) the independence assumption for this seed. Then the boxes will be opened a short moment before the measurements and the values ##R_i## given by these random number generators will be used in such a way that they effectively modify ##D_1## resp. ##D_2## by ##D_i \to D_i + R_i \bmod 3##.

By construction and your independence claim I can be quite sure that the ##R_i## are independent of the ##A_i, B_i##, and even if the ##D_i## before this final randomization have a nontrivial correlation which would be sufficient to violate the Bell inequalities, nothing of it remains in the sum.
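A small Monte Carlo illustration of this washing-out (the correlated "pre-settings" below are invented for the sketch; the only assumption used is that the ##R_i## are drawn independently of everything else):

```python
import random

def match_rate(settings, hv):
    """Fraction of runs where the setting equals the hidden variable; a crude
    stand-in for any statistical dependence between the two."""
    return sum(s == h for s, h in zip(settings, hv)) / len(hv)

rng = random.Random(0)
runs = 200_000

# Invented hidden variables and deliberately correlated "pre-settings":
# the pre-setting simply copies the hidden variable 60% of the time.
hv = [rng.randrange(3) for _ in range(runs)]
pre = [h if rng.random() < 0.6 else rng.randrange(3) for h in hv]

# Independent randomizers R_i, applied as D_i -> (D_i + R_i) mod 3.
r = [rng.randrange(3) for _ in range(runs)]
final = [(d + ri) % 3 for d, ri in zip(pre, r)]

print(match_rate(pre, hv))    # well above 1/3: the pre-settings are correlated
print(match_rate(final, hv))  # close to 1/3: the correlation is washed out
```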

I know, a little bit unfair to combine here two different parts of your argumentation. But you have no choice here: Either you acknowledge that there are ways to make sure that there is independence - then I will use these ways to construct a Bell theorem test where we can make sure that there is independence of the decisions of the experimenters from the initial state of the pair. This will be, then, a Bell test safe against your version of superdeterminism. Or you cannot do it, then my point is proven that with superdeterminism statistical experiments are dead.
AndreiB said:
a N body system where N is 10^26 or so is not simple. You need to solve the equations to get the predictions. Since the initial states are picked at random, there is no conspiracy involved.
And that's why there will be no correlation. If you doubt this, make such computations yourself with the largest system you can handle. You will not find any correlations. Except for cases where it is possible to explain them in a sufficiently simple way.

AndreiB said:
I want to measure the temperature of something. I put a thermometer there. Nobody cares if my decision to make that measurement at that time and in that specific way was free or not. The only important thing is that the report contains all those decisions so that others can reproduce the experiment and rely on its result.
Just to clarify what we are arguing about here. It looks like you want to argue that superdeterminism can somehow be restricted for macroscopic bodies if they are in sufficiently stable states or so. Let's assume that is the case. Then I will build a device creating pseudorandom numbers out of such macroscopic pieces.
 
  • #104
Lord Jestocost said:
The point of "Superdeterminism" is simple: The initial conditions of the Universe were arranged that way that all measurements performed were and are consistent with the predictions of quantum mechanics.
...even though the actual underlying physical laws are completely different from quantum mechanics. In other words, the initial conditions were arranged so that we humans would be misled into inferring a completely wrong set of physical laws, which nevertheless make all the correct predictions about experimental results.
 
  • #105
AndreiB said:
1. No, but he derives QM, so, implicitly, his model makes the same predictions as QM.

2. This is the point of superdeterminism. You CANNOT pick the settings. They are determined by the initial state of the system. What you can pick is the initial state of the whole experiment; after that you cannot touch the experiment.

3. Yes, because you are part of the universe (the CA) and you have to obey the rules of the CA. The decisions you make are already "there" in the past state of the CA.

4. The "true match rate" you are speaking about is only expected when no interactions exist (like in the case of distant Newtonian billiard balls). If interactions exist, the "true match rate" has to be determined based on the initial state of the whole experiment, as explained in my post #93. So, you need to calculate the rate based on the CA rules, not based on Newtonian rigid-body mechanics. If 't Hooft's math is correct, the "true match rate" of his CA is the same as QM's one.
...

Everything you mention is a gigantic hand wave. Basically: assume that my conclusion is correct, and that proves my conclusion is correct.

1. In the 't Hooft reference, he does not derive QM in the 6 pages. And since there is no model presented, and no attempt to show why Bell does not apply, he certainly doesn't make any predictions.

2. But I DO pick the settings! The question I want answered is HOW my choice is controlled. If something forces me to make the choice I do, what is it and most importantly... WHERE IS IT? Is it in an atom in my brain? Or a cell? Or a group of cells? And what if I choose to make a choice by way of my PC's random number generator? How did the computer know to give me a number that would lead to my choice?

3. This is basic stuff here, and simply saying "the universe made me do it" does not hold water. We are supposed to be scientists, and this is not science at all.

4. You wouldn't need to jump through "superdeterministic ad hoc" rules if Bell didn't exclude all local realistic theories. Specifically, if the observed rate and the "true" rate both fell inside the classical range (>33% in my example) so that the Bell inequalities aren't violated. In case you missed it, "superdeterminism" ONLY applies to Bell tests and the like. For all other physical laws (including the rest of QM), apparently the experimenter has completely free will. For example: tests of gravitational attraction, the speed of light, atomic and nuclear structures, etc.

BTW, the superdeterminism you are describing is contextual... and therefore violates local realism. Maintaining local realism is the point of superdeterminism in the first place. So that's a big fail. In case it is not clear to you why this is so: the SD hypothesis is that the experimenter is forced to make a specific choice. Why should that be necessary? If the true rate were always 25% (violating the Bell constraint), then there would be no need to force the experimenter to make a choice that complies - any setting choice would support SD. Obviously, the true rate must be within the >33% region in my example to avoid contextuality issues.
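For concreteness, here is a small sketch of where the two numbers come from, assuming Type I pairs where the probability of identical results is cos² of the angle difference, and a "local realistic" model in the usual sense of predetermined ± values carried identically by both photons for all three settings:

```python
import itertools
import math
import statistics

angles = [0, 120, 240]  # degrees: the three settings in my challenge

# Quantum prediction, averaged over the pairs of *different* settings:
qm = statistics.fmean(
    math.cos(math.radians(a - b)) ** 2
    for a, b in itertools.permutations(angles, 2)
)
print(qm)  # 0.25

# Local-realist "true" rate: each pair carries predetermined answers (+ or -)
# for all three settings. Find the assignment with the lowest match rate
# over pairs of different settings.
best = min(
    statistics.fmean(hidden[i] == hidden[j]
                     for i, j in itertools.permutations(range(3), 2))
    for hidden in itertools.product("+-", repeat=3)
)
print(best)  # 1/3
```

No assignment of predetermined values gets below a 1/3 match rate on unequal settings, while the quantum prediction is 1/4; that is the whole tension.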
 
  • #106
PeterDonis said:
...even though the actual underlying physical laws are completely different from quantum mechanics. In other words, the initial conditions were arranged so that we humans would be misled into inferring a completely wrong set of physical laws, which nevertheless make all the correct predictions about experimental results.

And those physical laws being different only with respect to Bell tests and the like. Apparently, the speed of light really is a constant with the observed value of c. And general relativity does not require humans to be misled, etc.
 
  • #107
Sunil said:
I store the seeds for some pseudorandom number generators in devices near A resp. B sufficiently isolated so that you can justify (with whatever means, not my problem) the independence assumption for this seed.
The independence assumption cannot be justified for the seed because our theory has long range interactions. It can be theoretically justified in the non-interacting case (Newtonian mechanics). So, you cannot isolate the seed.

Sunil said:
I know, a little bit unfair to combine here two different parts of your argumentation. But you have no choice here: Either you acknowledge that there are ways to make sure that there is independence - then I will use these ways to construct a Bell theorem test where we can make sure that there is independence of the decisions of the experimenters from the initial state of the pair.
There is independence only in those theories without long-range interactions. I agree that Bell's theorem rules them out.

Sunil said:
Or you cannot do it, then my point is proven that with superdeterminism statistical experiments are dead.
1. Statistical experiments are possible with superdeterminism. Using computer simulations to test for many initial states is a valid method to do them.

2.You can use independence where the variables you are looking for are not significantly impacted by long-range interactions. Newtonian mechanics is such an example, but also chemistry (EM interactions between distant molecules do not lead to a net energy transfer), biology and so on.

Sunil said:
And that's why there will be no correlation...
Can you provide any evidence for your claim? That a complex system cannot lead to correlations?
Sunil said:
If you doubt this, make such computations yourself with the largest system you can handle...
Nice try to shift the burden of proof. It's your job to provide evidence for your assertions.

Sunil said:
You will not find any correlations. Except for cases where it is possible to explain them in a sufficiently simple way.
I'm looking forward to seeing your calculations.

Sunil said:
Just to clarify what we are arguing about here. It looks like you want to argue that superdeterminism can somehow be restricted for macroscopic bodies if they are in sufficiently stable states or so.
No. Electrons are as stable as billiard balls. But electrons do interact at a distance, while billiard balls don't (if you neglect gravity). The electrons inside billiard balls do interact, but because you have the same number of positive and negative charges this interaction does not manifest itself as a net force on the balls. So, if you are only interested in the position/velocity of the balls you can assume they are independent for distant objects.

Sunil said:
Let's assume that is the case. Then I will build a device creating pseudorandom numbers out of such macroscopic pieces.
This does not work, because the independence only holds in a regime where Newtonian mechanics is a good approximation. The emission of EM waves is not described in such a regime.
 
  • #108
Lord Jestocost said:
The point of "Superdeterminism" is simple: The initial conditions of the Universe were arranged that way that all measurements performed were and are consistent with the predictions of quantum mechanics.
Can you please explain how you arrived at that conclusion starting from my explanation:

"We repeat the simulation for a large number of initial states so that we can get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved." ?

And what does the Big Bang have to do with this?
 
  • #109
DrChinese said:
Everything you mention is a gigantic hand wave. Basically: assume that my conclusion is correct, and that proves my conclusion is correct.
Can you point out exactly where I assumed what I wanted to prove?

DrChinese said:
1. In the 't Hooft reference, he does not derive QM in the 6 pages. And since there is no model presented, and no attempt to show why Bell does not apply, he certainly doesn't make any predictions.
He presents the derivation in the 4th reference:

Fast Vacuum Fluctuations and the Emergence of Quantum Mechanics
Found Phys 51, 63 (2021)
https://arxiv.org/pdf/2010.02019.pdf

At page 14 we find:

"The main result reported in this paper is that by adding many interactions of the form (4.1), the slow variables end up by being described by a fully quantum mechanical Hamiltonian Hslow that is a sum of the form (5.1)."

DrChinese said:
2. But I DO pick the settings! The question I want answered is HOW my choice is controlled. If something forces me to make the choice I do, what is it and most importantly... WHERE IS IT? Is it in an atom in my brain? Or a cell? Or a group of cells? And what if I choose to make a choice by way of my PC's random number generator? How did the computer know to give me a number that would lead to my choice?
Think about it in this way. Before the experiment you are in some state. This state is not independent of the state of the particle source, since the whole system must obey Maxwell's equations or whatever equations the theory postulates. This restricts the possible initial states, and, because the theory is deterministic, it also restricts your future decisions. The same constraints apply to your random number generator.

The hypothesis here is that the hidden variables that would violate QM are impossible to produce because there is no initial state that would lead to their generation. But I insist, I do not claim that this hypothesis is true, only that it can be true for some theories. So, we cannot dismiss them all; we need to check them one by one.

DrChinese said:
4. You wouldn't need to jump through "superdeterministic ad hoc" rules if Bell didn't exclude all local realistic theories.
He only excluded those without long-range interactions. If you disagree, please explain why the function I mention in post #93 necessarily has equal probabilities. If the probabilities are different, the "true" rate for that theory could have, in principle, any value.

DrChinese said:
Specifically, if the observed rate and the "true" rate both fell inside the classical range (>33% in my example) so that Bell Inequalities aren't violated.
By "classical" here you only include Newtonian mechanics. I am not aware of any calculation in the context of a long-range interacting theory, like classical EM, GR or fluid mechanics.

DrChinese said:
In case you missed it, "superdeterminsm" ONLY applies to Bell tests and the like. For all other physical laws (including the rest of QM), apparently the experimenter has completely free will.
Not at all. In most cases, nobody cares about the experimenter's free will. If an astronomer reports a supernova explosion, nobody cares if his decision to look in that direction was free. As long as he reports correctly what he observed it does not matter. The same is true for LIGO or LHC discoveries.

DrChinese said:
For example: tests of gravitational attraction, the speed of light, atomic and nuclear structures, etc.
You test gravitational attraction by looking at orbiting stars for example. Why should I care if the astronomer was free or not to look there? You measure the speed of light by bouncing a laser from the Moon. Why is the experimenter's freedom required?

DrChinese said:
BTW, the superdeterminism you are describing is contextual...
Yes, it is.

DrChinese said:
and therefore violates local realism.
It does not. You assume what you want to prove.

DrChinese said:
Maintaining local realism is the point of superdeterminism in the first place. So that's a big fail. In case this is not clear to you why this is so: the SD hypothesis is that the experimenter is forced to make a specific choice. Why should that be necessary?
It is required by the consistency conditions of the initial state. Impossible choices do not correspond to a valid initial state, so you cannot do them.

DrChinese said:
If the true rate was always 25% (violating the Bell constraint), they there would be no need to force the experimenter to make such a choice that complies - any setting choice would support SD.
Any physically possible choice supports SD. The choices that are not made are impossible because there is no valid initial state that could evolve into them.

DrChinese said:
Obviously, the true rate must be within the >33% region in my example to avoid contextuality issues.
Contextuality is what we want, there is no issue here.
 
  • #110
AndreiB said:
The independence assumption cannot be justified for the seed because our theory has long range interactions. It can be theoretically justified in the non-interacting case (Newtonian mechanics). So, you cannot isolate the seed.
That means that once we have long-range interactions from gravity and EM, your "independence assumption" can never be applied in our world? OK, that means that in our world, with superdeterminism, statistical science is dead, no?
AndreiB said:
1. Statistical experiments are possible with superdeterminism. Using computer simulations to test for many initial states is a valid method to do them.
Computer simulations are computer simulations, not experiments. All you can do with them is to clarify what a theory predicts. A falsification requires experiments.
AndreiB said:
2.You can use independence where the variables you are looking for are not significantly impacted by long-range interactions. Newtonian mechanics is such an example, but also chemistry (EM interactions between distant molecules do not lead to a net energy transfer), biology and so on.
The variables I have to look for in the Bell experiment are the positions of the macroscopic detectors. They have to be independent from the preparation of the pair. This is all I need.

I guarantee this by having the seeds in macroscopic form in isolated rooms near the detectors, thus, not significantly impacted nor impacting anything outside the room in the initial phase. The rooms are opened only a short moment before the measurement itself happens, so Einstein causality (which holds for EM as well as gravity, thus, all the known long range forces) prevents an influence on the other side.
AndreiB said:
Can you provide any evidence for your claim? That a complex system cannot lead to correlations?
Yes. Namely the success of science based on classical causality. Which includes the common cause principle that correlations require causal explanations. Causal explanations in existing human science are quite simple explanations, they don't require anything close to computing even ##10^9## particles. If complex systems regularly led to nontrivial correlations, there would be a lot of known violations of the common cause principle.
AndreiB said:
I'm looking forward to seeing your calculations.
I have no time and resources for meaningless computations which are known to give only trivial results, namely independence. Which anyway would prove nothing.
 
  • #111
PeterDonis said:
In other words, the initial conditions were arranged so that we humans would be misled into inferring a completely wrong set of physical laws, which nevertheless make all the correct predictions about experimental results.
Superdeterminism does not imply that QM is wrong, on the contrary. The whole point of SD is to reproduce QM; why would you want to reproduce a wrong theory? Indeed, EPR proves that QM cannot be fundamental (if we want to avoid non-locality), but, as a statistical approximation, it is correct.
 
  • #112
Sunil said:
That means that once we have long-range interactions from gravity and EM, your "independence assumption" can never be applied in our world?
If those interactions are relevant for the variable of interest in the experiment, the independence assumption (IA) cannot be used. As repeatedly explained, we can ignore those forces in some situations but not in others.

Sunil said:
OK, that means that in our world, with superdeterminism, statistical science is dead, no?
As proven by the simulation example, science is not dead.

Sunil said:
Computer simulations are computer simulations, not experiments. All you can do with them is to clarify what a theory predicts. A falsification requires experiments.
A computer simulation allows you to calculate the theoretical prediction for a specific test, like a Bell test. You compare that prediction with experiment in the normal way. You don't need the independence assumption to perform the experiment; you just do it.

Sunil said:
The variables I have to look for in the Bell experiment are the positions of the macroscopic detectors. They have to be independent from the preparation of the pair. This is all I need.
Indeed.

Sunil said:
I guarantee this by having the seeds in macroscopic form in isolated rooms near the detectors, thus, not significantly impacted nor impacting anything outside the room in the initial phase. The rooms are opened only a short moment before the measurement itself happens, so Einstein causality (which holds for EM as well as gravity, thus, all the known long range forces) prevents an influence on the other side.
Again, this does not work. Say you have 2 balls and 1 electron. The position/momenta of the 2 balls can be assumed to be independent (since they are well described by Newtonian mechanics and no relevant long-range interaction is taking place - unless they are in space and gravity must be taken into account).

The "instantaneous" position/momentum of the electron is not independent of the 2 balls, since the electron "feels" the EM fields associated with the atoms in the balls. It will be accelerated back and forth in a sort of Brownian motion. Averaged over a long enough time, the trajectory of the electron would resemble the non-interacting one, since the EM forces cancel out on average.

Our hidden variable, however, depends on the exact state of the electron at the moment it "takes the jump", so it will not be independent of your macroscopic balls.

Sunil said:
Causal explanations in existing human science are quite simple explanations, they don't require anything close to computing even ##10^9## particles.
1. Really, have you seen a computer model for the formation of a planetary system from a cloud of gas and dust? How many particles are there? Sure, the calculation involves approximations, but Nature does the job without them. So, clearly, a very complex system of many interacting particles can lead to simple correlations, like those specific for a planetary system (same direction/plane of orbit, a certain mass distribution and so on.)

2. The existing explanations are simple because of limitations in computational power. Clearly, the more objects you include, the better the simulation approaches reality, not worse, as you imply. Meteorologists would love to have 10^26 data points and computers powerful enough to do the calculations. They are not restricted to 1 point/km because causality stops working at higher resolution.

Sunil said:
If complex systems regularly led to nontrivial correlations, there would be a lot of known violations of the common cause principle.
Why?

Sunil said:
I have no time and resources for meaningless computations which are known to give only trivial results...
Evidence please? (evidence for the triviality of the results, not for your lack of time)
 
  • #113
AndreiB said:
If those interactions are relevant for the variable of interest in the experiment, the independence assumption (IA) cannot be used. As repeatedly explained, we can ignore those forces in some situations but not in others.
For the variables of the first part of the experiment, the boxes containing the seeds are in no way relevant. There is a preparation of the state of photons, thus, no charge and no relevant gravity. The boxes are isolated.

In the second part, we use Einstein causality to show the irrelevance of the open boxes for the other device.

AndreiB said:
A computer simulation allows you to calculate the theoretical prediction for a specific test, like a Bell test. You compare that prediction with experiment in the normal way. You don't need the independence assumption to perform the experiment; you just do it.
You also need probability assumptions for the computer computation, say, for the choice of your initial values. And you need the independence assumption from everything else. Your ##10^{26}## particles are, last but not least, only a minor part of the universe.

By the way, your thought experiment simulation does not test superdeterminism. It simply tests a particular variant of usual theory, which assumes some distribution of the initial values, some equations of motion. Why you think it has any relation to superdeterminism is not clear.
AndreiB said:
Again, this does not work. Say you have 2 balls and 1 electron. The position/momenta of the 2 balls can be assumed to be independent (since they are well described by Newtonian mechanics and no relevant long-range interaction is taking place - unless they are in space and gravity must be taken into account).

The "instantaneous" position/momentum of the electron is not independent of the 2 balls, since the electron "feels" the EM fields associated with the atoms in the balls.
I will see how you will handle photons (which are used in most Bell tests). But my actual impression is that you will find another excuse for not allowing the Bell tests.
AndreiB said:
1. Really, have you seen a computer model for the formation of a planetary system from a cloud of gas and dust? How many particles are there?
As many as they were able to handle. I don't know and don't care. But without that computer simulation science would be as fine as it is today.

AndreiB said:
Sure, the calculation involves approximations, but Nature does the job without them. So, clearly, a very complex system of many interacting particles can lead to simple correlations, like those specific for a planetary system (same direction/plane of orbit, a certain mass distribution and so on.)
Simple correlations which have simple explanations.
AndreiB said:
2. The existing explanations are simple because of limitations in computation power. Clearly, the more objects you include, the better the simulation approaches reality, not worse, as you imply.
I don't imply this.
If complex systems regularly led to nontrivial correlations, there would be a lot of known violations of the common cause principle.
AndreiB said:
Why?
Because each correlation between some preparation and later human decisions would be a correlation without causal explanation, thus, a clear violation of the common cause principle. If such a correlation were observed, people would not ignore it, but would try very hard to get rid of it. With superdeterminism being correct and able to do what you claim - to lead to violations of the Bell inequalities in essentially all Bell tests - they would be unable to get rid of the correlation. (As they try hard to improve Bell tests.)

AndreiB said:
Evidence please? (evidence for the triviality of the results, not for your lack of time)
Learn to read, I have already given it.
 
  • #114
AndreiB said:
In medicine we have the problem that a lot of phenomena are not understood. The placebo effect simply means that the psychological state of the patient matters. Even if you cannot eliminate this aspect completely you could investigate the reason behind the effect and take that reason (say the presence of some chemicals in the brain) into account. Of course, this makes research harder, but it is also of a greater quality since you get a deeper understanding of the drug's action. In any case, it's not useless.
Independent of whether you can (or cannot) simply explain the origin of the placebo effect, the important realization is that you can experimentally verify its existence. And this possibility of experimentally verifying the violation of potentially unjustified independence assumptions is how Sabine Hossenfelder in 2011 came to seriously consider superdeterminism.
Once you have experimentally established the presence of an effect, it certainly makes sense to investigate the reason behind the effect.

And there is also another side of this coin: If you have a superdeterministic model, and it predicts that you have some chance to experimentally verify the presence of the violation of the independence assumption, then you are in a totally different situation than for 't Hooft's models. Understanding his points about high energy degrees of freedom might be worthwhile nevertheless, but not as a way to defend the possibility of superdeterminism. (And if you have an apparently superdeterministic model, but the reasons why it doesn't allow one to experimentally verify the presence of the violation of the independence assumption are more subtle and more consistent than for 't Hooft's models, then there is also the possibility that it is not really a superdeterministic model after all.)
 
  • #115
Sunil said:
For the variables of the first part of the experiment, the boxes containing the seeds are in no way relevant. There is a preparation of the state of photons, thus, no charge and no relevant gravity. The boxes are isolated.
In order to prepare the photons (classically, EM waves) you need an electron to accelerate. The polarizations of those waves depend on the way the electron accelerated, itself dependent on the EM fields at that location. The fields are correlated with the global charge distribution, including your boxes. The boxes cannot be isolated. They are, after all, just large groups of charges, each contributing to the EM fields in which the experiment unfolds.

Sunil said:
You also need probability assumptions for the computer computation, say, for the choice of your initial values.
Indeed.

Sunil said:
And you need the independence assumption from everything else. Your ##10^{26}## particles are, last but not least, only a minor part of the universe.
This might be a problem, indeed. I see two ways out:

1. One might prove mathematically that beyond a certain N the statistics remain stable, so we can compute the prediction using the minimum number of particles that could model the experiment (see the sketch after this list).
2. Since EM does not depend on scale you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent from the rest of the universe. I think this is actually doable in practice.
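Regarding option 1: a minimal sketch of what such a stability check could look like, with a made-up estimate_statistics() standing in for the real N-body computation (everything here is hypothetical; it only illustrates the stopping criterion):

```python
import random

rng = random.Random(1)

def estimate_statistics(n_particles, n_runs=20_000):
    """Hypothetical stand-in for the N-body computation of post #93: here the
    'hidden variable' is just a biased coin whose bias decays with the particle
    number, mimicking statistics that settle down once N is large enough."""
    p_plus = 0.5 + 1.0 / n_particles
    plus = sum(rng.random() < p_plus for _ in range(n_runs))
    return {"+": plus / n_runs, "-": 1.0 - plus / n_runs}

def max_difference(p1, p2):
    return max(abs(p1[k] - p2[k]) for k in p1)

def stable_prediction(start_n=10, factor=2, tol=5e-3):
    """Double the particle number until the estimated statistics stop changing."""
    n, previous = start_n, estimate_statistics(start_n)
    while True:
        n *= factor
        current = estimate_statistics(n)
        if max_difference(previous, current) < tol:
            return n, current
        previous = current

print(stable_prediction())
```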

Sunil said:
By the way, your thought experiment simulation does not test superdeterminism. It simply tests a particular variant of usual theory, which assumes some distribution of the initial values, some equations of motion. Why you think it has any relation to superdeterminism is not clear.
It's because I think that any usual theory with long-range interactions is potentially superdeterministic. There is no particular "superdeterministic" assumption. Just take a deterministic theory with long-range and local forces (hence a field theory) and see what you get.

Sunil said:
I will see how you will handle photons (which are used in most Bell tests). But my actual impression is that you will find another excuse for not allowing the Bell tests.
Not at all, see above!

Sunil said:
Simple correlations which have simple explanations.
They are only simple because we developed statistical tools to deal with large numbers of particles. It is still the case that large numbers of interacting particles can lead to observable correlations. If large numbers of particles could "cooperate" to produce planetary systems, why would they not be able to also produce entangled states?

Sunil said:
Because each correlation between some preparation and later human decisions would be a correlation without causal explanation, thus, a clear violation of the common cause principle.
Just because you do not know the explanation does not mean there is none, so there is no violation of the common cause principle.

Sunil said:
If such a correlation were observed, people would not ignore it, but would try very hard to get rid of it. With superdeterminism being correct and able to do what you claim - to lead to violations of the Bell inequalities in essentially all Bell tests - they would be unable to get rid of the correlation. (As they try hard to improve Bell tests.)
With SD they need not get rid of those correlations, since SD would provide the cause they are searching for. The SD explanation for a Bell test is of the same type as the explanation for why planets follow elliptical orbits. It's the N-body EM equivalent of the 2-body planet-star gravitational system.
 
  • #116
AndreiB said:
Since EM does not depend on scale you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent from the rest of the universe. I think this is actually doable in practice.
What could this possibly mean? It is rather obvious that you have never studied any of Aspect et al.'s papers.
It is not sufficient for your utterings to sound logical. They should also make sense.
 
  • #117
I will see how you will handle photons (which are used in most Bell tests). But my actual impression is that you will find another excuse for not allowing the Bell tests.

AndreiB said:
Not at all, see above!
Above I have found this:
AndreiB said:
In order to prepare the photons (classically, EM waves) you need an electron to accelerate. The polarizations of those waves depend on the way the electron accelerated, itself dependent on the EM fields at that location. The fields are correlated with the global charge distribution, including your boxes. The boxes cannot be isolated. They are, after all, just large groups of charges, each contributing to the EM fields in which the experiment unfolds.
An excuse for not allowing the use of your independence assumption in Bell tests.

But let's look at the next excuse which you will have to present for the experiment where the directions of the detectors are defined by starlight arriving shortly before the measurement, coming from the side opposite to the one from which the particle arrives at that detector. There was a real experiment of this kind. Instead of starlight, I would prefer CMBR radiation coming from this other side. So, the event which created these photons has not been in the past light cone of the preparation of the pair.

BTW, if there is a singularity in the past - and according to GR without inflation, as well as to GR with inflation caused by a change of the vacuum state, there has to be a singularity - then there is a well-defined and finite horizon of events which have a common event in the past with us. This horizon can easily be computed in GR, and in the BB without inflation it was quite small, so that the inhomogeneities visible in the CMBR were larger than this horizon size. This problem was named the "horizon problem". Inflation solves it FAPP by making the horizon larger than what we see in the CMBR. But it does not change the fact that those events we see in the CMBR coming from opposite sides are causally influenced by causes farther away in those directions, and all we have to do is go far enough away in our search for those causes that we end up with causes in the opposite directions which have no common past. So, each of the two causes can influence (if Einstein causality holds) only one of the detectors, and not the preparation procedure.

But once modifying this external cause changes only one detector setting, I have independent control over that detector setting.

As before, I'm sure you will find an excuse.
AndreiB said:
1. One might prove mathematically that beyond a certain N the statistics remain stable, so we can compute the prediction using the minimum number of particles that could model the experiment.
This is what normal science, with its rejection of superdeterminism, is assuming: the statistics remain stable, namely the interesting variables which do not have sufficiently simple causal explanations for their correlations remain independent. Except that nobody hopes one can prove this mathematically for a completely general situation. But for various pseudorandom number generators such independence proofs are known. I even remember having seen the proof for the sequence of digits of ##\pi##.
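Not a proof, of course, but a toy illustration of the idea (the digits of ##\pi## below are hard-coded rather than computed, and the "hidden variable" stream is just a seeded PRNG): derive detector settings from the digits of ##\pi## and check that they show no obvious correlation with an unrelated stream.

```python
import random

# First 50 decimal digits of pi, hard-coded (toy example only).
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

# Detector setting for each trial: 0 if the digit is even, 1 if odd.
settings = [int(d) % 2 for d in PI_DIGITS]

# An unrelated stream standing in for "hidden variables" at the source,
# drawn from a seeded Mersenne Twister (purely illustrative).
rng = random.Random(12345)
hidden = [rng.randint(0, 1) for _ in settings]

# Empirical correlation between the two streams.
n = len(settings)
mean_s = sum(settings) / n
mean_h = sum(hidden) / n
cov = sum((s - mean_s) * (h - mean_h) for s, h in zip(settings, hidden)) / n
var_s = sum((s - mean_s) ** 2 for s in settings) / n
var_h = sum((h - mean_h) ** 2 for h in hidden) / n
corr = cov / (var_s * var_h) ** 0.5
# Expected to be small, of order 1/sqrt(n), if the streams are independent.
print(f"empirical correlation: {corr:+.3f}")
```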
AndreiB said:
2. Since EM does not depend on scale you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent from the rest of the universe. I think this is actually doable in practice.
Ah, I see, this is what you meant by "above". Nice trick, given that (I think) you know that it is quite difficult to prepare entangled states of macroscopic bodies in a stable way.
AndreiB said:
It's because I think that any usual theory with long-range interactions is potentially superdeterministic.
So your claim that superdeterministic theories may be falsifiable is bogus. Thinking of a theory as potentially superdeterministic does not make that theory superdeterministic.
AndreiB said:
there is no particular "superdeterministic" assumption. Just take a deterministic theory with long-range and local forces (hence a field theory) and see what you get.
You get what usual science assumes - independence if there is no causal justification for a dependence.

This independence assumption (the null hypothesis) is clearly empirically falsifiable, and if it is falsified, then usual science starts to look for causal explanations. And it usually finds them. ("Usually" because this requires time, so one has to expect that there will always be cases where the search has not yet been successful.)
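A minimal sketch of how such an independence null hypothesis is tested in practice (the counts below are invented purely for illustration): a chi-squared test between the recorded detector settings and some other recorded property of the runs.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = detector setting (a or a'),
# columns = some other recorded property of the runs (e.g. source batch).
# The counts are invented for illustration only.
table = np.array([
    [512, 498, 505],   # setting a
    [495, 507, 483],   # setting a'
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2f}")
# A large p-value gives no reason to reject independence; a tiny p-value
# would falsify the independence assumption for this pair of variables
# and trigger a search for a causal explanation.
```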
AndreiB said:
It is still the case that large number of interacting particles can lead to observable correlations.
This is as probable as all the atoms of a gas concentrating themselves in one small part of the bottle.

This could, in fact, be another argument: I would expect states with correlations to have lower entropy than states with zero correlations. Indeed, if the gas concentrates in the upper left corner of the bottle, there will be correlations between the up-down and left-right directions, while in the homogeneous gas distribution there will be none. (Except, of course, for those shapes of the bottle where the shape alone already leads to such a correlation even for the homogeneous distribution.)
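A small numerical sketch of this point (using a triangular corner region to stand in for gas concentrated near a corner, since a rectangular corner region alone would leave the two coordinates uncorrelated):

```python
import random

random.seed(0)
N = 100_000

# Homogeneous gas: positions uniform over the unit-square "bottle".
uniform = [(random.random(), random.random()) for _ in range(N)]

# Gas concentrated in one corner: uniform over the triangle x + y < 0.5,
# sampled by rejection. The triangular shape couples the two coordinates.
corner = []
while len(corner) < N:
    x, y = random.random(), random.random()
    if x + y < 0.5:
        corner.append((x, y))

def pearson(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    vx = sum((p[0] - mx) ** 2 for p in points) / n
    vy = sum((p[1] - my) ** 2 for p in points) / n
    return cov / (vx * vy) ** 0.5

print(f"homogeneous:  corr(x, y) = {pearson(uniform):+.3f}")  # close to 0
print(f"corner blob:  corr(x, y) = {pearson(corner):+.3f}")   # close to -0.5
```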
AndreiB said:
If large number of particles could "cooperate" to produce planetary systems, why would they not be able to also produce entangled states?
The cooperation needed for planetary systems is already predicted by very rough approximations, and this prediction does not change even if we make rough approximation errors. But if we add white noise to correlated variables, the correlations decrease.
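The effect of white noise on correlations is easy to check numerically; a minimal sketch with two perfectly correlated variables and independent Gaussian noise of growing strength:

```python
import random

random.seed(1)
N = 100_000

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Start from perfectly correlated variables...
base = [random.gauss(0, 1) for _ in range(N)]
for sigma in (0.0, 0.5, 1.0, 2.0):
    # ...and add independent white noise of growing strength to each copy.
    xs = [b + random.gauss(0, sigma) for b in base]
    ys = [b + random.gauss(0, sigma) for b in base]
    print(f"noise sigma = {sigma}: corr = {pearson(xs, ys):.2f}")
# The correlation drops from 1 toward 0 as the noise grows:
# analytically, corr = 1 / (1 + sigma**2) for this construction.
```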
AndreiB said:
Just because you do not know the explanation does not mean there is none, so there is no violation of the common cause principle.
False logic. If I don't know the explanation, there may be one. But it is equally possible that there is none, and thus a violation of the common cause principle. Your "so there is no" simply does not follow.

We have the large experience of humankind with the successful application of the common cause principle. Essentially everybody identifies correlations in everyday life and then tries to find explanations. This fails if the correlations are not real but statistical errors. But many times causal explanations will be found. If the principle were violated in reality, this would have been detected long ago.
AndreiB said:
With SD they need not get rid of those correlations, since SD would provide the cause they are searching for.
Which is a euphemistic reformulation of my thesis that SD would be the end of science. There would no longer be any need to search for causal explanations of correlations.
 
  • #118
AndreiB said:
1. Can you point out exactly where I assumed what I wanted to prove? ... He presents the derivation in the 4th reference:

Fast Vacuum Fluctuations and the Emergence of Quantum Mechanics
Found Phys 51, 63 (2021)
https://arxiv.org/pdf/2010.02019.pdf

2. Think about it in this way. Before the experiment you are in some state. This state is not independent of the state of the particle source, since the whole system must obey Maxwell's equations or whatever equations the theory postulates. This restricts the possible initial states, and, because the theory is deterministic, it also restricts your future decisions. The same constraints apply to your random number generator.

The hypothesis here is that the hidden variables that would violate QM are impossible to produce because there is no initial state that would lead to their generation. But, I insist, I do not claim that this hypothesis is true, only that it can be true for some theories. So, we cannot dismiss them all, we need to check them one by one.

3. Not at all. In most cases, nobody cares about the experimenter's free will. If an astronomer reports a supernova explosion, nobody cares if his decision to look in that direction was free. As long as he reports correctly what he observed it does not matter. The same is true for LIGO or LHC discoveries.

You test gravitational attraction by looking at orbiting stars, for example. Why should I care if the astronomer was free or not to look there? You measure the speed of light by bouncing a laser off the Moon. Why is the experimenter's freedom required?

4. [Superdeterminism is contextual] Yes, it is.

1. Why yes I can! Note that for 't Hooft's reference, he is quoting... himself! (Just as you seem to do.) And he claims this to be a derivation of QM. Wow, who knew you could do that. And from his reference, which is about "fast" variables (which I am not aware of as part of any standard model), he tells us:

"...we first assume the existence of very high frequency oscillations. These give rise to energy levels way beyond the regime of the Standard Model."

How about we assume the existence of very small turtles? "It's turtles all the way down..." Or how about we assume the universe was created last Thursday, and our memories of an earlier existence are false (Last Thursdayism). Basically: you can't reference another author referencing himself with a completely speculative set of hypotheses/assumptions that dismiss Bell, and then say "look what I proved".

2. I accept that any model, deterministic or not, has a restricted number of initial states. What is missing (among other things) is i) a causal connection between a) the entangled particle source(s) and b) the many apparatuses determining the measurement context; and ii) a viable description of the mechanism by which that causal connection between a) and b) coordinates things so as to obey the quantum mechanical expectation values locally.

An important note about the sources of entangled particles per a) above. The entangled particles can come from fully independent laser sources, without having ever been present in the same light cone. Now, I am perfectly aware that within a purported Superdeterministic model, everything in the observable universe lies in a common light cone and therefore would be "eligible" to participate in the anti-Bell conspiracy. But now you would be arguing:

There exist particular photons, from 2 different lasers*, that also end up being measured at the proper angles (context) for a Bell inequality violation, but only when statistically averaged (even though there is complete predetermination of each individual case); and "somehow" these two photons each carry a marker of some kind (remember, they have never been in causal contact - so each laser source must have passed on this marker to its photon at the time it was created) so that each "knows" whether to be up or down - but only in the specific context of a measurement setting that can be changed midflight according to a pseudo-random generator, a setting that can itself involve any number of human brains and/or computers.

So, where did any of this get explained other than by a general purpose "suppose that"? Because there is no known physics - EM, QM or otherwise - that could serve as a base mechanism for any of the above to support the wild assertions involved in SD.

3. I wasn't referring to free will in the mentioned measurements of c, gravitation, or any other constant. I was referring to the fact that the ONLY scenario where Superdeterminism is a factor is in Bell tests. Apparently, the universe is just fine at revealing its true nature without a need to "conspire" about everything else.

Imagine that we are measuring the mean lifetime of a free neutron as being 880 seconds. But then you tell me it's really 333 seconds. Your explanation is: it's just that the sample was a result of initial settings, and those initial settings led to an unfair sample. And by the way, that unfair sample always gives the same result: 880 seconds instead of 333 seconds. By analogy, that is much like the SD hypothesis that the local realistic value of my Bell test example must be at least .333, although the observed value is .250. Why do you need a conspiracy to explain the results of one scientific test, but no others?

4. Glad you agree. A contextual theory is not realistic. According to EPR ("elements of reality"), there must be counterfactual values for all elements of reality that exist, regardless of whether or not you can measure them simultaneously.

*See for example:
High-fidelity entanglement swapping with fully independent sources
https://arxiv.org/abs/0809.3991
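For readers wondering where the .333 and .250 figures come from: assuming the usual three-setting (0°, 120°, 240°) match-rate argument for polarization-entangled photon pairs that match perfectly at equal settings, both numbers can be reproduced in a few lines (an illustrative sketch, not part of the cited paper):

```python
import itertools
import math

# QM prediction: probability that the two outcomes match when the
# analyzers differ by 120 degrees is cos^2(120 deg).
qm_match = math.cos(math.radians(120)) ** 2
print(f"QM match rate at 120 degrees: {qm_match:.3f}")    # 0.250

# Local-realistic model: each pair carries predetermined answers (+1/-1)
# for all three settings 0, 120, 240 degrees. Enumerate all 8 instruction
# sets and compute the match rate over the three pairs of different settings.
pairs = list(itertools.combinations(range(3), 2))
worst = min(
    sum(triple[i] == triple[j] for i, j in pairs) / len(pairs)
    for triple in itertools.product([+1, -1], repeat=3)
)
print(f"minimum local-realistic match rate: {worst:.3f}")  # 0.333...

# Any mixture of instruction sets therefore predicts a match rate of at
# least 1/3, while QM predicts (and experiments show) 1/4.
```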
 
  • Like
Likes mattt and weirdoguy
  • #119
AndreiB said:
Superdeterminism does not imply that QM is wrong
It does on any interpretation of QM except one that views QM as just a statistical model over some underlying deterministic physics, where the statistics and probabilities have the standard classical ignorance interpretation. The latter interpretation of QM seems to be one of the least popular ones.
 
  • Like
Likes vanhees71
  • #120
DrChinese said:
A contextual theory is not realistic. According to EPR ("elements of reality"), there must be counterfactual values for all elements of reality that exist, regardless of whether or not you can measure them simultaneously.
I disagree. Bohmian and other realistic interpretations of QM, given that they reproduce the QM predictions, are necessarily contextual by the Kochen-Specker theorem.

And this can also be seen explicitly quite easily, given that the trajectories of the system are influenced by the trajectories of the measurement devices.
 
  • Like
Likes mattt, vanhees71 and gentzen
