Is this popular description of entanglement correct?

In summary, this conversation discusses the popularized statement "If particle A is found to be spin-up, 'we know that' particle B 'has' spin-down." The speaker thinks this statement is not always accurate, because if the second particle is measured along a different direction, its result need not be opposite to the first particle's.
  • #71
PeterDonis said:
You appear to me to be claiming that superdeterminism can provide a valid explanation for the actual experimental results we have on measurements of entangled particles.
Good to know. I certainly did not intend to claim that. I did try to defend Hossenfelder and Palmer to a certain limited extent, namely that they do understand many important points related to superdeterminism. Not because I want to convince anybody, but simply because it is my honest opinion, and I don't want to lie. But because I have the strong impression that the word "superdeterminism" is heavily loaded with connotations, I also don't want to be drawn into discussions about it. (But I found it valid to reply with "Careful, ..." to a statement that went "I hate ... SOME STATEMENT IN ALL CAPS ...", because it exemplified exactly this loaded state of affairs.)

PeterDonis said:
Nothing in your responses has addressed anything I said about that.
Good to know. Then I will stop here. Thanks for trying to clarify to me what DrChinese really intended to say. Sorry that I have misunderstood his intention because he somehow "triggered" me.
 
  • #72
Nullstein said:
Well, there is currently no generally satisfactory interpretation of quantum mechanics in the first place.

I do not think it is a sound scientific practice to state personal opinions as scientific facts. Much better to say there is no single interpretation of QM that everybody agrees on. Some think many interpretations are simply a continuation of discussions about what probability is:
https://math.ucr.edu/home/baez/bayes.html

Yet strangely, interpretations of probability do not seem to generate much discussion. Most that use probability, such as actuaries, do not worry about it at all. Indeed, when I studied it, I was blissfully unaware of the issues. But then again, I did applied math. Pure math guys may go into it more.

Thanks
Bill
 
  • Like
Likes vanhees71 and gentzen
  • #73
AndreiB said:
Let ##S_0## be the microscopic state (positions/momenta of charged particles + electric/magnetic fields) of the source at a certain (initial) time before the experiment. Let ##D_1^0## and ##D_2^0## be the corresponding microscopic states of the detectors. Since all charged particles interact, the hidden variable ##\lambda## (the polarisations of the emitted EM waves at some later time) would be given by a very complicated function like

##\lambda = f(S_0, D_1^0, D_2^0).##

So ##\lambda## cannot be independent of either ##D_1^0## or ##D_2^0##; it is a function of them.
Being a function is not in conflict with being independent. A good pseudorandom number generator gives you a pseudorandom number as a function ##u## of the seed ##s## and the number of applications of the generator ##i##. In principle it may simply be defined by a function ##u(i+1,s) = F(u(i,s))## which gives widely different results even if only a single bit is changed. Nonetheless, for all the usual tests of randomness, these sequences will look like random sequences. In particular ##P(u(i+1,s)|u(i,s)\in[u_0,u_1]) = P(u(i+1,s))## if that part ##[u_0,u_1]## specifies only a minor part of the bits contained in the value ##u(i,s)## itself. So even explicit functions can lead to independence according to all the applicable statistical criteria.
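For concreteness, here is a minimal numerical sketch of this point (the linear congruential generator and the coarse bin are my own illustrative choices, not part of the argument itself):

```python
# A deterministic generator u(i+1) = F(u(i)): a classic 32-bit LCG.
# Despite being an explicit function, successive outputs pass a simple
# independence check.
M = 2**32
A, C = 1664525, 1013904223

def F(u):
    return (A * u + C) % M

u, seq = 12345, []
for _ in range(200_000):
    u = F(u)
    seq.append(u)

# Event for u(i+1): "falls in the top half of the range"
top = [v >= M // 2 for v in seq]
p_uncond = sum(top[1:]) / (len(seq) - 1)

# Condition on u(i) lying in a coarse bin [u0, u1] fixing only leading bits
cond = [top[i + 1] for i in range(len(seq) - 1) if M // 4 <= seq[i] < M // 2]
p_cond = sum(cond) / len(cond)

print(f"P(next in top half)              = {p_uncond:.4f}")
print(f"P(next in top half | coarse bin) = {p_cond:.4f}")  # approximately equal
```

The conditional and unconditional frequencies agree to within sampling noise, even though ##u(i+1,s)## is an explicit deterministic function of ##u(i,s)##.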
AndreiB said:
I have no idea what correlations, if any, can be generated in this way.
And I tell you that destroying remaining correlations is easy: adding a pseudorandom number will do the job. Creating correlations, by contrast, is impossible. Observable correlations require causal explanations.
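The first half of this claim is also easy to check numerically (a toy sketch; the 10% flip rate and the XOR mask are my own illustrative choices):

```python
import random
random.seed(0)

n = 100_000
a = [random.randint(0, 1) for _ in range(n)]
# b is strongly correlated with a: it copies a 90% of the time
b = [x if random.random() < 0.9 else 1 - x for x in a]

def corr(x, y):
    # fraction of agreements, rescaled to [-1, 1]
    agree = sum(u == v for u, v in zip(x, y)) / len(x)
    return 2 * agree - 1

print(corr(a, b))   # ~ 0.8

# XOR-ing b with an independent pseudorandom stream destroys the correlation
mask = [random.randint(0, 1) for _ in range(n)]
print(corr(a, [v ^ m for v, m in zip(b, mask)]))   # ~ 0.0
```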

AndreiB said:
There is no need to posit any conspiracy. Interacting objects are not independent, this is the only point I am trying to make. Since one premise of Bell's theorem is not fulfilled, the conclusion does not follow. The conclusion might still be true, but not necessarily so.
It is a known property of conspiracy theories that one cannot reject them by pure logic. One also has to accept the basic principles of causality.

AndreiB said:
As explained, the above argument applies only to interacting systems. Even in a Bell test, the macroscopic settings of the detectors are independent parameters (since the interaction between their constituent particles cannot determine a macroscopic rotation of the device).
No. Microscopic causes can influence macroscopic detector settings and often do. Without this, no quantum measurements would be possible at all. The experimenters can decide to use microscopic particles by design, say, by using a Geiger counter to decide what to measure. But even if they throw macroscopic dice, the results can be influenced by microscopic turbulence, which can have even atomic causes.

AndreiB said:
So, you can assume independence in all experiments where the microscopic arrangement is not relevant, which includes almost everything except Bell tests and a few other quantum experiments.
I disagree. There is no experiment with statistical outcomes where the outcome does not depend on microscopic causes too.

AndreiB said:
I also think that the importance of the independence assumption is greatly exaggerated. Most experiments do not depend on it.
I disagree. There is no experiment with statistical outcomes which could not be explained away if one cannot make an independence assumption.
AndreiB said:
It's not about inventing anything. You analyze the situation and determine, based on what we know, what is independent and what is not. It's a scientific, objective criterion.
Using your way of reasoning in the microscopic world the conclusion will be simple: nothing is independent. So, no analysis is really necessary, the result will be all the same.
 
  • #74
DrChinese said:
The error is asserting A causes B in a Bell test. Everyone knows that the relative ordering of Alice/Bob measurements has no observable effect on the outcome.
It follows only that the hypothesis ##A\to B## is as viable as the hypothesis ##B\to A##, given the outcome. I don't understand why this would make one of the two possible causal explanations an error.
DrChinese said:
An actual theory of Superdeterminism would necessarily have so much baggage, it would be easier to believe in God by way of Occam's Razor. Neither of which would make much sense for a quantum theory.
Good point.
 
  • #75
PeterDonis said:
But you can talk about QM and experimental results without talking about hidden variables at all. Hidden variable models are not the only possible models. You can even talk about the fact that QM/experimental results violate the Bell inequalities without talking about hidden variable models.
The problem is that we have the EPR argument, a form of which is presented in my post #7. This leaves you with two options: non-locality (in the sense that A causes B even if A and B are space-like separated) and hidden variables. Rejecting hidden variables necessarily implies non-locality. I am not saying it's wrong, but I still think that locality is the most reasonable option; hence hidden variables are the most reasonable option.

PeterDonis said:
If you are really unable to see the obvious proof, consider: EM is a local hidden variable model in the sense that Bell's theorem uses that term. (So is classical General Relativity.) Therefore, by Bell's theorem, its predictions must satisfy the Bell inequalities.
My whole point is that EM has not been shown to obey the statistical independence requirement. Without statistical independence, Bell's conclusion does not follow.

PeterDonis said:
If you define "theory of relativity" to only include classical relativity, then you have excluded quantum field theory. In which case your definition of "theory of relativity" is irrelevant to this discussion.
Relativity is about the space-time structure. There is no quantum theory of space-time. QFT is just an example of a physical theory using the SR background, in the same way non-relativistic QM uses the Newtonian background.

PeterDonis said:
No, let's define what "A caused B" means in terms of testable predictions. Otherwise it's just meaningless noise as far as physics is concerned. Can you do that?
It's a reductio ad absurdum argument. If you assume, for the sake of the argument, that A caused B, you run into some unpleasant consequences, like the requirement of defining an absolute reference frame. If you don't like those consequences you must deny the premise (A caused B), which, given the EPR argument presented in my post #7, necessarily implies the existence of hidden variables.

I do not think that A causes B, so I take the hidden variable route.
 
  • #76
Nullstein said:
Andrei is right about the fact that classical EM can in principle violate Bell's inequality. One just has to fine-tune the initial conditions of the full system accordingly.
Why do you need to fine-tune the initial conditions?
 
  • #77
DrChinese said:
1. Your conclusion is incorrect. QFT does not specify a cause-effect relationship, and so nothing is inconsistent. The error is asserting A causes B in a Bell test.
It was a reply to PeterDonis. He seemed to be fine with a causal relationship between space-like separated events, so I presented him with a reductio ad absurdum argument. I do not actually argue that A causes B. However, QFT does not forbid it either.

DrChinese said:
Everyone knows that the relative ordering of Alice/Bob measurements has no observable effect on the outcome.
I never said it does.

DrChinese said:
2. Earlier, you mentioned long range EM effects (which presumably are bound by c). Bell tests have been performed where the measurement settings are changed midflight so that there is no possibility of EM effects between the angle settings of the detection systems. And it is in fact those settings which determine the statistical outcomes in Bell tests.
EM is a deterministic theory. In order to be able to change the settings "midflight" you need to start with an initial state that necessarily determines you to make that change in the way you make it and at the specific time you make it. That initial state also determines the hidden variable (because of the long-range interactions), so one cannot establish those variables to be independent.

DrChinese said:
3. I hate it when Superdeterminism is brought up. There is NO THEORY/INTERPRETATION OF SUPERDETERMINISM that explains Bell test results.
Well, 't Hooft disagrees with you:

Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy

https://arxiv.org/abs/2103.04335

DrChinese said:
Superdeterminism is a general idea, as specific as using the term "God".
No, it's a very clearly defined idea. The hidden variable and the settings of the detectors are not statistically independent variables.

DrChinese said:
You can just as easily say God picks the individual outcomes of Bell tests, and that therefore nature can be local deterministic.
Of course you can. But just because you can imagine stupid superdeterministic theories does not mean that all such theories are necessarily stupid.

DrChinese said:
You aren't explaining anything.
I am following the logically available options. In the end we will see if an explanation would emerge or not. What I can tell you with certainty is that without hidden variables there is no local explanation of EPR correlations.

DrChinese said:
An actual theory of Superdeterminism would necessarily have so much baggage, it would be easier to believe in God by way of Occam's Razor.
What evidence do you have for this assertion?
 
  • Like
Likes gentzen
  • #78
Sunil said:
Being a function is not in conflict with being independent. A good pseudorandom number generator gives you a pseudorandom number as a function ##u## of the seed ##s## and the number of applications of the generator ##i##. In principle it may simply be defined by a function ##u(i+1,s) = F(u(i,s))## which gives widely different results even if only a single bit is changed. Nonetheless, for all the usual tests of randomness, these sequences will look like random sequences. In particular ##P(u(i+1,s)|u(i,s)\in[u_0,u_1]) = P(u(i+1,s))## if that part ##[u_0,u_1]## specifies only a minor part of the bits contained in the value ##u(i,s)## itself. So even explicit functions can lead to independence according to all the applicable statistical criteria.
Indeed, SOME functions would still allow for independence. But some other functions would not. So, depending on the function you have, Bell's theorem applies or not, which means that you cannot a priori assume that a certain local theory with long-range interactions is ruled out. You need to compute the relevant function and see whether or not the hidden variables are independent of the measurement settings.

In the case of classical EM, for example, do you have any evidence about what that function looks like?

Sunil said:
And I tell you that destroying remaining correlations is easy: adding a pseudorandom number will do the job. Creating correlations, by contrast, is impossible. Observable correlations require causal explanations.
You cannot "add a pseudorandom number" to the function. The function is defined by the structure of the theory. In the case of EM, the states must satisfy Maxwell's equations.

Sunil said:
It is a known property of conspiracy theories that one cannot reject them by pure logic.
Since my argument does not depend on any special choice of initial state, I find the discussion about "conspiracy" a red herring. You are free to choose whatever initial state you want, as long as such a state is physically possible (obeys Maxwell's equations or whatever equations the theory under investigation has). Then your hidden variable would be related to the measurement settings by some function that follows from the mathematical structure of the theory. Where is the conspiracy?

Sunil said:
No. Microscopic causes can influence macroscopic detector settings and often do.
Sure, but not always. You have independence in those situations when the microscopic state does not influence the macroscopic state.

Sunil said:
I disagree. There is no experiment with statistical outcomes where the outcome does not depend on microscopic causes too.
The statistics of a coin flip do not depend on the charge distribution of the electrons and nuclei inside the coin.

The trajectory of a billiard ball depends on the initial position and momentum, not on the microscopic configuration of its internal charges.

Fluid mechanics does not depend on the exact arrangement of the molecules.

The efficacy of a treatment does not depend on the exact arrangement of the drug molecules.

Sunil said:
I disagree. There is no experiment with statistical outcomes which could not be explained away if one cannot make an independence assumption.
Not all experiments have statistical outcomes.

Sunil said:
Using your way of reasoning in the microscopic world the conclusion will be simple: nothing is independent.
Indeed, in the microscopic world nothing is independent. It's a well known implication of QM. It's called contextuality.

Sunil said:
So, no analysis is really necessary, the result will be all the same.
Yes, an analysis is necessary, since not all experiments depend on the microscopic states. The billiard balls on a table do not have independent microscopic states. But this is irrelevant if I study their collisions.
 
  • #79
AndreiB said:
The problem is that we have the EPR argument, a form of which is presented in my post #7. This leaves you with two options: non-locality (in the sense that A causes B even if A and B are space-like separated) and hidden variables. Rejecting hidden variables necessarily implies non-locality.
That's the EPR argument. (Today you have to add some more rejections of complete nonsense like superdeterminism).
AndreiB said:
I am not saying it's wrong, but I still think that locality is the most reasonable option; hence hidden variables are the most reasonable option.
You cannot save locality with hidden variables, that's Bell's theorem.
 
  • #80
Sunil said:
That's the EPR argument. (Today you have to add some more rejections of complete nonsense like superdeterminism).
It so happens that you cannot win an argument by simply labeling your opponent's point "nonsense". If this is all you have to say, I can relax and drink my coffee.

Sunil said:
You cannot save locality with hidden variables, that's Bell's theorem.
In principle you can. Just because you call the solution "nonsense" doesn't make it so.
 
  • #81
AndreiB said:
Indeed, SOME functions would still allow for independence. But some other functions would not.
Only those functions which are so simple and regular that a causal explanation can be found.
AndreiB said:
So, depending on the function you have, Bell's theorem applies or not, which means that you cannot a priori assume that a certain local theory with long-range interactions is ruled out.
There are, of course, experimental configurations where you cannot exclude a causal explanation for correlations between the experimenters' decisions and the prepared state. In medicine, studies which are not double-blind come to mind.

Science depends on the possibility to make independent choices for experiments. That there may also be bad experiments, where correlations appear because of design errors, is harmless.

AndreiB said:
In the case of classical EM, for example, do you have any evidence about what that function looks like?
The classical Maxwell equations have a limiting speed, namely c. This is sufficient to prove the Bell inequalities for space-like separated measurements.
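For reference, here is the counting behind the CHSH form of the inequality this appeals to (a minimal sketch; it takes the local-hidden-variable factorization as given rather than deriving it from the Maxwell equations):

```python
# Each hidden state assigns definite outcomes +/-1 to both of Alice's settings
# (a, a') and both of Bob's (b, b'). Enumerating all 16 deterministic
# assignments shows |S| <= 2 for every one, hence for any statistical mixture;
# quantum mechanics reaches 2*sqrt(2).
from itertools import product
from math import sqrt

best = max(abs(A0*B0 + A0*B1 + A1*B0 - A1*B1)
           for A0, A1, B0, B1 in product([-1, 1], repeat=4))

print(best)         # 2      (bound for local deterministic models)
print(2 * sqrt(2))  # ~2.828 (quantum Tsirelson value)
```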
AndreiB said:
You cannot "add a pseudorandom number" to the function. The function is defined by the structure of the theory. In the case of EM, the states must satisfy Maxwell's equations.
You can use some value to define the actual orientation of the device so that this turning cannot causally influence the initial state. And to this value of the turning angle (which may depend on whatever, no problem) you can add that pseudorandom number.
AndreiB said:
Since my argument does not depend on any special choice of initial state, I find the discussion about "conspiracy" a red herring.
Whatever you name it, it does not matter. You need a consistent pattern of correlations in all Bell experiments, while you are not even able to identify correlations in simple pseudorandom number generators where everything is known and solvable on every computer.

AndreiB said:
You are free to choose whatever initial state you want, as long as such a state is physically possible (obeys Maxwell's equations or whatever equations the theory under investigation has). Then your hidden variable would be related to the measurement settings by some function that follows from the mathematical structure of the theory. Where is the conspiracy?
The conspiracy is that this "related" translates into a correlation. This happens only in exceptionally simple circumstances, those circumstances where even we human beings are usually able to identify causal explanations. And already for quite simple pseudorandom number generators you have de facto no chance.
AndreiB said:
Sure, but not always. You have independence in those situations when the microscopic state does not influence the macroscopic state.
Possible, but why should we care about the possibility to make bad design?
AndreiB said:
The statistics of a coin flip do not depend on the charge distribution of the electrons and nuclei inside the coin.
Not sure but plausible. But there are also all the gas particles in the air, and the atoms of the body of the guy who throws the coin.
AndreiB said:
The trajectory of a billiard ball depends on the initial position and momentum, not on the microscopic configuration of its internal charges.
But on those of the billiard player. And the air.
AndreiB said:
Fluid mechanics does not depend on the exact arrangement of the molecules.
Sure? The equations of continuum mechanics do not, by definition. But if there is turbulence, minor distortions will heavily increase in size. Not much turbulence is necessary to create such a dependence.
AndreiB said:
The efficacy of a treatment does not depend on the exact arrangement of the drug molecules.
No, but it depends on knowing who got the real medicine and who got the placebo, if the drug acts like a placebo. Some aspects of an experiment, like those in your examples, can be more or less completely controlled, so that the remaining uncertainty does not matter; others cannot. And in real experiments you always have to assume that some of the aspects which you cannot control are independent of what really matters in your experiment.
AndreiB said:
Not all experiments have statistical outcomes.
One can, of course, reduce the possible outcomes to a discrete number of large subsets, and then it is possible that there will be only a single one, so no statistics involved. But these are exceptions, not the rule. Measurement errors are usually sufficient to force you to use statistics.
AndreiB said:
Indeed, in the microscopic world nothing is independent. It's a well known implication of QM. It's called contextuality.
Nothing special for QM. Contextuality is also common in human interactions.
AndreiB said:
Yes, an analysis is necessary, since not all experiments depend on the microscopic states. The billiard balls on a table do not have independent microscopic states. But this is irrelevant if I study their collisions.
If you reduce science to experiments with deterministic outcomes, so that no statistics is necessary, not much remains. If you have statistics, it is difficult to live without any independence assumptions. I would allow for some exceptions, but these would be exceptions, not the rule.
 
  • Like
Likes gentzen
  • #82
AndreiB said:
He seemed to be fine with a causal relationship between space-like events
I said no such thing. All I said was that QFT does not say anything about "causal relationships" at all. It just says that spacelike separated measurements commute. You are the one who keeps harping on "causal relationships" without being able to give any testable definition of the concept.
 
  • #83
AndreiB said:
1. Well, 't Hooft disagrees with you:

Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy

https://arxiv.org/abs/2103.04335

2. No, it's a very clearly defined idea. The hidden variable and the settings of the detectors are not statistically independent variables.

1. In this paper*, there is no model presented that explains Bell correlations.

As always, I challenge anyone (and especially 't Hooft) to take the DrChinese challenge for their local realistic (or superdeterministic as the case may be) model. If there are values for various choices of measurement angles (which I choose, or think I choose), what are they for angle settings 0/120/240 degrees (entangled Type I PDC photon pairs)? The challenger provides the results, I pick the angle settings. According to the 't Hooft hypothesis, I will always pick pairs that yield the correct quantum expectation value.

How is it that I, sitting here at a remote keyboard, am forced to select angle pairs that come out to a 25% "apparent" match rate when the "true" match rate (according to local realism) is over 33%? But hey, if it works, I will gladly acknowledge a winner. Always looking for takers to the DrChinese challenge. :smile:

2. Clearly defined? Exactly how are Alice and Bob's apparently independent choices of settings tied to the measurement outcomes? This is the crucial detail always skipped over. What quantum effect causes human brains to make the choices that precisely yield the grossly misleading value predicted by QM? Because I missed that section of the "model"... There is no model of "Superdeterminism" that is any more specific than saying "God made me choose these measurement settings".

*All gigantic claims and hand-waving, no proofs or meaningful examples. I am well aware of 't Hooft's deserved reputation, but that does not give him a pass in this area. No serious researcher in the area of Bell entanglement would cite these papers as accepted or useful science. This paper has been cited by 1 paper, by D. Dolce: "Prediction of Unified New Physics beyond Quantum Mechanics..." has 34 references, 17 to his own papers; draw your own conclusion.
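For readers following along, the two match rates quoted in the challenge above can be reproduced in a few lines (a sketch of the standard Mermin-style counting for the 0/120/240 setup; the code is an illustration, not part of the challenge itself):

```python
# For identically polarized (Type I PDC) photon pairs the quantum match
# probability at relative angle t is cos(t)**2; a local-realistic dataset
# instead assigns each pair definite +/- answers for all three settings.
from itertools import product
from math import cos, radians

angles = [0, 120, 240]

# Quantum prediction, averaged over the ordered pairs of *different* settings
qm = sum(cos(radians(a - b)) ** 2
         for a in angles for b in angles if a != b) / 6
print(f"QM match rate (different settings): {qm:.3f}")     # 0.250

# Local realism: for each predetermined answer triple, count matches among
# pairs of different settings; no triple gets below 1/3
worst = min(sum(t[i] == t[j] for i in range(3) for j in range(3) if i != j) / 6
            for t in product([+1, -1], repeat=3))
print(f"Local-realistic minimum match rate: {worst:.3f}")  # 0.333
```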
 
  • Like
  • Love
Likes mattt, Lord Jestocost, bhobba and 1 other person
  • #84
DrChinese said:
There is no model of "Superdeterminism" that is any more specific than saying "God made me choose these measurement settings".
That's your claim, but can you prove it? There are several models presented in https://arxiv.org/pdf/1511.00729.pdf and in the conclusion, the author says:
Given that standard quantum mechanics satisfies statistical locality and measurement independence, Occam’s razor suggests that it is the intuition behind determinism (and thus statistical completeness) that must be given up. On the other hand, it may be argued that relaxing measurement dependence is relatively far more efficient: only 1/15 of a bit of measurement dependence is required to model the singlet state, in comparison to 1 bit of communication in nonlocal models, and 1 bit of shared randomness in nondeterministic models [11].
I don't disagree that superdeterminism is a peculiar and unlikely solution to the problem, but one has to admit that non-locality is just as peculiar and thus, one shouldn't delegitimize research in this direction. I do believe that hidden variable theories are the wrong approach in general (independent of whether they are non-local or superdeterministic), but in science we shouldn't judge an approach based on our personal preferences. There's no reason to be harsh on people who study these approaches in a rigorous, scientific way.
 
  • #85
Nullstein said:
I don't disagree that superdeterminism is a peculiar and unlikely solution to the problem, but one has to admit that non-locality is just as peculiar
No. Non-locality is a minor problem; science lived and developed without problems during the time gravity was described by the non-local Newtonian theory. It is not even very peculiar, because non-locality is the natural limit of a very large speed of causal influences. So, if the speed of causal influences is too fast to be measured, we have to expect non-locality.

Superdeterminism is, instead, the ultimate conspiracy theory and, if taken seriously, the end of any statistical experiment.
Nullstein said:
and thus, one shouldn't delegitimize research in this direction. There's no reason to be harsh on people who study these approaches in a rigorous, scientific way.
Criticism of wrong approaches is a necessary part of science.

Of course, it would be preferable if the independence of science would return, so that scientists would be free to do research in whatever direction, without the certainty that they will not get the next grant if their research direction is not liked by the mainstream. But there is nothing one can do against this; the independence of science is a nice memory of the past, such is life.

But it does not follow that, as some sort of compensation, one should not criticize those who support nonsensical research directions, nor that one should not use harsh words to name such research directions. I think such criticism, however harsh, is much better than ignoring them.

Which is, unfortunately, the way outsiders are handled today by the mainstream. Criticizing outsiders is also not favored by the mainstream; those who start it risk losing their job once the next grant is needed.
 
  • #86
AndreiB said:
just because you call the solution "nonsense" doesn't make it so.
Of course. One has to give counterarguments against the arguments provided by those who propose nonsense. As I have done, and will continue to do. But this will not prevent me from naming superdeterminism nonsense. Because it is: I continue to claim that superdeterminism, if taken seriously, would be the end of science. There would be no way to distinguish it from astrology.
 
  • #87
Sunil said:
No. Non-locality is a minor problem; science lived and developed without problems during the time gravity was described by the non-local Newtonian theory. It is not even very peculiar, because non-locality is the natural limit of a very large speed of causal influences. So, if the speed of causal influences is too fast to be measured, we have to expect non-locality.
I explained in post #60 why it's not even the speed of the interaction that is most peculiar about it.

Moreover, non-locality and superdeterminism are quite similar, because in a completely deterministic theory, the non-local cause in the present can be evolved back into the past. If Alice turning the knob has caused Bob's particle's properties to change, then there is some event in the past that has caused Alice to turn the knob in the first place, which can be taken to be the ultimate cause. Similarly you can evolve back the event of Bob's particle's properties changing. Since everything had the chance to interact during the Big Bang, at some point in the far away past, there will be some overlap between the two evolved back events that could act as a superdeterministic common cause (i.e. conditioning on it will violate statistical independence).

Sunil said:
Superdeterminism is, instead, the ultimate conspiracy theory and, if taken seriously, the end of any statistical experiment.
Not a worse conspiracy than non-locality. And it's not true that it would be the end of any statistical experiment. One can still come up with theories, some of them being superdeterministic, and then compare them to the data. In the end, one picks the one which explains the data in the most elegant way and that may or may not be a superdeterministic theory.
Sunil said:
Criticism of wrong approaches is a necessary part of science.
Criticism should be based on arguments though and not on personal preferences or even polemics.
Sunil said:
But it does not follow that, as some sort of compensation, one should not criticize those who support nonsensical research directions, nor that one should not use harsh words to name such research directions. I think such criticism, however harsh, is much better than ignoring them.
Calling something "nonsensical" is not an acceptable form of criticism and being harsh isn't either. As I said, only arguments should matter.
 
  • #88
Nullstein said:
I explained in post #60 why it's not even the speed of the interaction that is most peculiar about it.
I find there
In fact, the knob on Alice's polarizer must, when turned, somehow magically have the capability to modify Bob's particle despite the fact that it was never built specifically to have this capability. Why doesn't it manipulate any other particle in the universe as well?
The polarizer interacts completely non-magically with the particle which is in Alice's laboratory. That particle was prepared together with Bob's particle, in a very special state. If there is no such special history of relations, there is no entanglement and no non-local influence. (The history of these special relations may be quite complex, but without any such history there is nothing.)
Nullstein said:
Moreover, non-locality and superdeterminism are quite similar, because in a completely deterministic theory, the non-local cause in the present can be evolved back into the past. If Alice turning the knob has caused Bob's particle's properties to change, then there is some event in the past that has caused Alice to turn the knob in the first place, which can be taken to be the ultimate cause. Similarly you can evolve back the event of Bob's particle's properties changing. Since everything had the chance to interact during the Big Bang, at some point in the far away past, there will be some overlap between the two evolved back events that could act as a superdeterministic common cause.
You forget that not every common bit of information in the past gives a correlation. And there is well-defined math for controlling for a common cause, which can tell you, for example, that only some part of the correlation is explained by this common cause while a remaining part is not.

In dBB theory the non-local influences are precisely described mathematically. They are quite restricted, and the influence can be computed. Superdeterminism has no mathematics which allows one to compute anything. Don't forget that mathematicians have proven that some pseudorandom sequences survive many tests of randomness. And randomness is what has to be expected if the dependencies become too complex, as they necessarily do in superdeterminism.

That there exist some possible common causes ##C\to A, C\to B## in the past is usually nothing that matters; it matters only if their influence on ##A## resp. ##B## is strong enough to explain the correlation. You need ##P(A)\neq P(A|C)## or ##P(B)\neq P(B|C)##, and the difference should not be some negligible epsilon, but enough to explain the observable correlation ##P(AB)\neq P(A)P(B)##.
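A toy calculation makes this size constraint concrete (the bias of 0.1 and the fair coin for ##C## are my own illustrative choices): if ##A## and ##B## are independent given ##C##, the correlation ##C## can produce is limited by how strongly it shifts each marginal.

```python
# A weak common cause C shifts P(A=1) and P(B=1) by +/-0.1 each; the resulting
# correlation P(AB) - P(A)P(B) comes out ~ 0.1 * 0.1 = 0.01, far too small to
# explain a strong observed correlation.
import random
random.seed(1)

N = 500_000
eps = 0.1                              # strength of C's influence on A and B
na = nb = nab = 0
for _ in range(N):
    c = random.random() < 0.5          # the common cause
    a = random.random() < 0.5 + (eps if c else -eps)
    b = random.random() < 0.5 + (eps if c else -eps)
    na += a; nb += b; nab += a and b

pa, pb, pab = na / N, nb / N, nab / N
print(f"P(AB) - P(A)P(B) = {pab - pa*pb:.4f}")   # ~ 0.01 = eps**2
```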
Nullstein said:
Not a worse conspiracy than non-locality. And it's not true that it would be the end of any statistical experiment. One can still come up with theories, some of them being superdeterministic, and then compare them to the data.
No. No statistical experiment can give you anything if you are not allowed to assume independence as the default assumption when no correlation is known (the null hypothesis). You have looked in a nasty way at the device. This is independent, and therefore does not change anything? Sorry, no, this nasty look has superdeterministically caused whatever.

Nullstein said:
In the end, one picks the one which explains the data in the most elegant way and that may or may not be a superdeterministic theory.
It cannot be a superdeterministic theory because such a theory cannot predict anything. To predict something you always have to make a lot of simplifying assumptions, and you always have to assume that most things in the universe are independent of what is being studied.

Or you go back to astrology. But given that the positions of the planets are, of course, visible to you and can therefore influence you, as well as your date of birth, astrology is much more compatible with science than superdeterminism.
Nullstein said:
Calling something "nonsensical" is not an acceptable form of criticism and being harsh isn't either. As I said, only arguments should matter.
Ok, feel free to ignore that I name superdeterminism nonsense. But do care about the arguments I propose too. It remains nonsense and I will continue to name it nonsense.
 
  • #89
Sunil said:
The polarizer interacts completely non-magically with the particle which is in Alice's laboratory. That particle was prepared together with Bob's particle, in a very special state. If there is no such special history of relations, there is no entanglement and no non-local influence. (The history of these special relations may be quite complex, but without any such history there is nothing.)
The interaction between Alice's polarizer and Alice's particle is plausible and non-magical, because they are in direct contact with each other. However, Alice's polarizer is not in direct contact with Bob's particle, and neither is it in direct contact with the TV in the living room. So why would it be plausible that her polarizer can modify Bob's particle but not turn on the TV in the living room? Entanglement is just a statistical feature of the whole population of identically prepared systems and not a property of the individual particles. For example, if the measurement axes are not aligned, the correlation may be only 10%, so it can't be an individual property of the particles and just shows up in the statistics of the whole ensemble.
Sunil said:
You forget that not every common bit of information in the past gives a correlation. And that there is a well-defined math to control for some common cause which can tell you, for example, that only some part of the correlation is explained by this common cause but a remaining part not.
No I didn't forget that. I've shown you a recipe to construct a common cause event in the past given two events in the present under the assumption that they are linked by a non-local cause-and-effect relationship and the assumption that they had the chance to interact at some time in the past (as would be the case in a Big Bang scenario). This common cause satisfies the required conditional probability relations. Under the given assumptions, the existence of a non-local explanation implies the existence of a superdeterministic explanation.
Sunil said:
No. No statistical experiment can give you anything if you are not allowed to assume independence as the default assumption when no correlation is known (the null hypothesis). You have looked in a nasty way at the device. This is independent, and therefore does not change anything? Sorry, no, this nasty look has superdeterministically caused whatever.
A superdeterministic theory can make statistical predictions which can be falsified, just like any other theory. The situation is not worse than in any other hidden variable theory. We cannot falsify the claim that there are hidden variables, but we can draw conclusions from the theory and falsify it, if the predictions don't match the experiment.
Sunil said:
It cannot be a superdeterministic theory because such a theory cannot predict anything.
A superdeterministic theory can of course predict something. For example, the paper I cited earlier describes superdeterministic theories that predict the Bell correlations and the theories would have to be rejected if the predictions didn't match the experiment.
 
  • #90
Sunil said:
No. Non-locality is a minor problem,

As Peter keeps correctly pointing out, statements like the above depend on what you mean by locality. Also, it expresses a personal reaction that is different to scientific fact. People do it all the time, but it is wise to realize what is going on. As shown by the many discussions here, Bell can be 'confounding' to many. Yes, I have changed my views on it several times since posting here for over ten years now. Bell is airtight, and its experimental confirmation is one of the outstanding achievements of 20th Century science, but how you view it does lead to subtleties. Superdeterminism is IMHO the last gasp of those who, for some reason, want an out, but again, just my opinion. I am a bit surprised by 't Hooft, whose How To Be A Good Theoretical Physicist is IMHO a gem, up there with the Feynman Lectures, the Landau Theoretical Minimum and Susskind's Theoretical Minimum:
https://www.goodtheorist.science/qft.html

Thanks
Bill
 
  • Like
Likes gentzen
  • #91
Nullstein said:
The interaction between Alice's polarizer and Alice's particle is plausible and non-magical, because they are in direct contact with each other. However, Alice's polarizer is not in direct contact with Bob's particle, and neither is it in direct contact with the TV in the living room. So why would it be plausible that her polarizer can modify Bob's particle but not turn on the TV in the living room?
Alice's polarizer influences the configuration of Alice's particle via direct contact. Alice's particle interacts with Bob's particle through the entanglement created by the preparation. For the details, look at the Bohmian velocity.

Nullstein said:
Entanglement is just a statistical feature of the whole population of identically prepared systems and not a property of the individual particles.
The "just" in "just a statistical feature" is your interpretation. The preparation procedure was applied to all individual particles. So, it can lead (and does lead) to shared behavior.

Nullstein said:
For example, if the measurement axes are not aligned, the correlation may be only 10%, so it can't be an individual property of the particles and just shows up in the statistics of the whole ensemble.
This argument makes no sense to me.
Nullstein said:
No I didn't forget that. I've shown you a recipe to construct a common cause event in the past given two events in the present under the assumption that they are linked by a non-local cause-and-effect relationship and the assumption that they had the chance to interact at some time in the past (as would be the case in a Big Bang scenario). This common cause satisfies the required conditional probability relations.
Because you say so?

Nullstein said:
Under the given assumptions, the existence of a non-local explanation implies the existence of a superdeterministic explanation.
Given that a "superdeterministic explanation" can be given to everything (which shows that it has nothing in common with an explanations) this is a triviality not worth to be mentioned.
Nullstein said:
A superdeterministic theory can make statistical predictions which can be falsified, just like any other theory. The situation is not worse than in any other hidden variable theory. We cannot falsify the claim that there are hidden variables, but we can draw conclusions from the theory and falsify it, if the predictions don't match the experiment.
No, no superdeterministic theory can make falsifiable predictions.

Of course, this depends on what one names a superdeterministic theory. A theory which claims that some experimenters have cheated and, instead of using a really random preparation, used knowledge about some initial data, also has a correlation between the experimenters' decisions and the initial data. If you name this a "superdeterministic theory", then, indeed, a "superdeterministic theory" can be a normal falsifiable theory. But in this case there is a simple, straightforward prescription for how to falsify it: use a different method of making the experimenters' decisions. Say, add a pseudorandom number generator and use its output to modify the decision. If the effect remains unchanged, then this particular conspiracy theory is falsified. And given this possibility, theories of this type are obviously worthless for the discussion of Bell's inequality, because the effect is already known to appear for very many different methods of making the experimenters' choices.

For me, a superdeterministic theory requires more, namely that it goes beyond the simple "cheating experimenters" theory. That means it has to explain a correlation between the initial data and the experimenters' decisions for every prescription of how those decisions are made. Which, for a start, includes a combination of the output of pseudorandom number generators, light sources from the opposite side of the universe, and some simple Geiger counter. Note also that the correlation should always be present, and be large enough to allow the observed quantum violations of the BI.
 
  • Like
Likes mattt
  • #92
Non-locality is a minor problem
bhobba said:
As Peter keeps correctly pointing out, statements like the above depend on what you mean by locality.
No problem, in this case I talk about Bell locality.
bhobba said:
Also, it expresses a personal reaction that is different to scientific fact.
Of course, if I look at the facts I have presented and draw a conclusion, this conclusion is a personal reaction. This does not make it non-objective. If I compute 2+2=4, this is also only my personal computation; I can make errors in my computations. So what?

That science, and in particular the theory of gravity together with its applications in astronomy, developed successfully in the time of non-local Newtonian gravity is a fact. So we have empirical evidence from the history of science that science works nicely if based on non-local theories.

The point that non-local theories appear in a natural way as limits of local theories, if the limiting velocity of causal influences goes to infinity, is also a quite simple fact. That means all one needs for a non-local theory to be viable is that the limiting velocity is too large to be measured.

Both points cannot be made for superdeterminism. The claim that something assumed to be zero by every normal causal theory can be nonzero prevents superdeterminism from being a limit of normal causal theories. And my claim remains that superdeterminism, if taken seriously (instead of being ignored everywhere except in the particular case of the BI violations, to get rid of the non-locality which would destroy relativistic metaphysics), would be the end of science.
bhobba said:
Bell is airtight, and its experimental confirmation is one of the outstanding achievements of 20th Century science, but how you view it does lead to subtleties.
I try to view it without following relativistic prejudices, that's all. But relativistic prejudices are quite strong feelings. Which is what has to be expected, given that those who study physics were plausibly impressed by Einstein's argumentation for relativity already during their childhood. If they had not been impressed by this argumentation and liked it, they would hardly have decided to study physics. Such childhood impressions, if not questioned and rejected during one's youth, tend to become dogma and create strong emotional reactions against the persons who question them. The readiness to question everything does not extend to relativistic metaphysics. Not strange, if one takes into account that the readiness to question everything of common sense was learned together with relativistic metaphysics.

The relativistic argument itself is a valid one: There is a symmetry in the experiences, and the simplest way to explain it is to assume that it is a fundamental symmetry, a symmetry of reality. But this is not that strong. It is sufficient to present a derivation of relativistic symmetry for observables from a non-symmetric reality to meet it.
 
  • #93
Sunil said:
Only those functions which are so simple and regular that a causal explanation can be found.
I think we have a different understanding about what that function is, so I'll give a more detailed account of its calculation.

We start with a system of N charges corresponding to a physically possible microscopic state of the whole experiment. We input this state into a simulation on some supercomputer. The simulation will output 8 symbols:

Ax, Ay, Az - the spin components (hidden variables) of particle A
Bx, By, Bz - the spin components (hidden variables) of particle B
D1 (orientation of detector 1 at the time of detection)
D2 (orientation of detector 2 at the time of detection)

So, an output like (+++, --+, X, Y) would correspond to an experimental run where the spins of particle A are UP, UP, UP in the X, Y and Z directions respectively, the spins of particle B are DOWN, DOWN, UP in the X, Y and Z directions respectively, and the particles are measured on X (at detector D1) and on Y (at detector D2).

We repeat the simulation for a large number of initial states so that we get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved. In the end we just count how many times each value of the hidden variables corresponds to each detector setting. Our function will simply give you the probability of getting a certain hidden variable for a certain type of measurement.

As you can see, this function is fixed. It's just the statistical prediction of the theory (say, classical EM) for this experiment. The function itself is trivially simple, but it is based on a huge amount of calculation used to solve those N-body EM problems. You CAN choose whatever initial states you want (random number generators and such) but you cannot mess with the function.
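In code, the tally described above looks like this (a toy sketch: the "evolution" is a stand-in bit-extraction map, not an EM simulation, so whether the test at the end comes out independent for classical EM is exactly the question under dispute):

```python
# Random initial states -> deterministic map -> (hidden variable, settings),
# then count joint frequencies and test measurement independence.
import random
from collections import Counter
random.seed(42)

def evolve(s0):
    # Stand-in for the deterministic N-body evolution: disjoint bits of the
    # initial state fix the hidden variable and the two detector settings.
    lam = ("+" if s0 & 1 else "-") + ("+" if s0 & 2 else "-")   # e.g. "+-"
    d1, d2 = "XYZ"[(s0 >> 2) % 3], "XYZ"[(s0 >> 4) % 3]
    return lam, d1, d2

N = 300_000
joint, settings, marg = Counter(), Counter(), Counter()
for _ in range(N):
    s0 = random.getrandbits(64)        # random initial state, no fine-tuning
    lam, d1, d2 = evolve(s0)
    joint[(lam, d1, d2)] += 1
    settings[(d1, d2)] += 1
    marg[lam] += 1

# Measurement independence holds iff P(lam | settings) == P(lam) in every cell
dev = max(abs(joint[k] / settings[(k[1], k[2])] - marg[k[0]] / N)
          for k in joint)
print(f"max |P(lam|settings) - P(lam)| = {dev:.4f}")  # only sampling noise here
```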

OK, so now we can understand what Bell's independence assumption amounts to. It posits that the probability distribution of the hidden variables must be the same for every choice of detector settings. My question to you is simple: why? I think it is entirely possible that those probabilities would not be the same. Some combinations could have probability 0 because, say, no initial state can evolve into them.

As far as I can tell, assuming the probabilities must be the same is completely unjustified or, using your words, pure nonsense.

It's also nonsense to claim that if the probabilities come out differently we should abandon science. As you can see, superdeterminism is perfectly testable, you can get predictions from it and compare to experiment.
Sunil said:
Science depends on the possibility to make independent choices for experiments.
Explain to me why the above test, based on computer simulations where choices are not independent, is not science.

Sunil said:
The classical Maxwell equations have a limiting speed, namely c. This is sufficient to prove the Bell inequalities for space-like separated measurements.
Again, classical EM involves long-range interactions. Only after the function is calculated can you see whether the independence assumption is satisfied or not.

Sunil said:
You can use some value to define the actual orientation of the device so that this turning cannot causally influence the initial state.
You cannot. Determinism implies that for a certain initial state the evolution is unique.

Sunil said:
The conspiracy is that this "related" translates into a correlation. This happens only in exceptionally simple circumstances, those circumstances where even we human beings are usually able to identify causal explanations.
An N-body system where N is ##10^{26}## or so is not simple. You need to solve the equations to get the predictions. Since the initial states are picked at random, there is no conspiracy involved.

Sunil said:
Not sure but plausible. But there are also all the gas particles in the air, and the atoms of the body of the guy who throws the coin.
We can rigorously prove that macroscopic neutral objects do not interact at a distance by calculating the Van der Waals forces between them. These are almost zero when collisions are not involved. So we can justify the independence assumption theoretically.
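As a back-of-the-envelope check (the 0.3 nm contact distance and 1 cm lab separation are illustrative choices), the London-type ##1/r^6## scaling already shows how drastic the suppression is:

```python
# The London dispersion pair energy scales as 1/r^6, so relative to contact
# the interaction at lab-scale separation is suppressed by an enormous factor.
r_contact = 0.3e-9   # m, a typical interatomic contact distance
r_macro = 1e-2       # m, a lab-scale separation

print(f"U(1 cm) / U(contact) ~ {(r_contact / r_macro) ** 6:.1e}")   # ~ 7.3e-46
```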

In the case of direct collisions (contact between the body and the coin) there is no independence, sure, and nobody would claim it is.

In the case of fluid mechanics we can simply rely on experiment. If the Navier–Stokes equations give good predictions, we can deduce that the microscopic state of the fluid can be disregarded for that specific experiment. No need to make unjustified assumptions.
Sunil said:
One can, of course, reduce the possible outcomes to a discrete number of large subsets, and then it is possible that there will be only a single one, so no statistics involved. But these are exceptions, not the rule. Measurement errors are usually sufficient to force you to use statistics.
This is different. Sure, there are measurement errors, and you can try to reduce them by repeating the experiment and using the average value. But this has nothing to do with the measurement settings being free parameters.

I want to measure the temperature of something. I put a thermometer there. Nobody cares whether my decision to make that measurement at that time and in that specific way was free or not. The only important thing is that the report contains all those decisions, so that others can reproduce the experiment and rely on its result.
 
  • #94
PeterDonis said:
I said no such thing. All I said was that QFT does not say anything about "causal relationships" at all.
Yes, but there is a principle of logic (the law of excluded middle) that says that the following statements:

P1. A caused B
P2. A did not cause B

cannot both be true.

What you are doing is oscillating between them, which is logically fallacious. If I say that EPR proves that you either need A to cause B or hidden variables, you say that there is no experimental distinction between A causing B and A not causing B. But this is irrelevant. One of them has to be false. Which one?
 
  • #95
AndreiB said:
there is a principle of logic (the law of excluded middle)
Which only applies if the binary concept being used is in fact well-defined for the domain being discussed. If "cause" is not well-defined (and so far, despite my repeated requests, you have not given a testable definition of "cause"), then sentences involving that concept are meaningless, so you can't apply logic using them.
 
  • Like
Likes bhobba
  • #96
PeterDonis said:
Which only applies if the binary concept being used is in fact well-defined for the domain being discussed. If "cause" is not well-defined (and so far, despite my repeated requests, you have not given a testable definition of "cause"), then sentences involving that concept are meaningless, so you can't apply logic using them.
Logic does not require all concepts to be experimentally testable. And there is nothing meaningless about the concept of cause. In the case of EPR, "A caused B" means that the spin measured at A (say UP) changed B from whatever state it was in before (which includes no state at all) to a spin state of DOWN. So, you can replace the word "caused" with the word "changed". So, you need to choose between these options:

P1. The measurement result changed B from whatever it was before to a DOWN state.
P2. The measurement result did not change B from whatever it was before to a DOWN state.
 
  • #97
AndreiB said:
Logic does not require all concepts to be experimentally testable.
If you can't give a testable definition of "cause", then it's not well defined if you're trying to make general claims based on "cause".

You can, of course, define what "cause" means in a particular model you happen to prefer even if that definition doesn't make "cause" testable in your model. But that doesn't require me to accept your definition or your model. And you certainly can't use such a model to make general assertions.

AndreiB said:
you can replace the word "caused" with the word "changed".
Doesn't change anything about what I've said.
 
  • #98
DrChinese said:
1. In this paper*, there is no model presented that explains Bell correlations.
No, but he derives QM, so, implicitly, his model makes the same predictions as QM.

DrChinese said:
As always, I challenge anyone (and especially 't Hooft) to take the DrChinese challenge for their local realistic (or superdeterministic as the case may be) model. If there are values for various choices of measurement angles (which I choose, or think I choose), what are they for angle settings 0/120/240 degrees (entangled Type I PDC photon pairs)? The challenger provides the results, I pick the angle settings.
This is the point of superdeterminism. You CANNOT pick the settings. They are determined by the initial state of the system. What you can pick is the initial state of the whole experiment; after that you cannot touch the experiment.
DrChinese said:
According to the 't Hooft hypothesis, I will always pick pairs that yield the correct quantum expectation value.
Yes, because you are part of the universe (the CA) and you have to obey the rules of the CA. The decisions you make are already "there" in the past state of the CA.

DrChinese said:
How is it that I, sitting here at a remote keyboard, am forced to select angle pairs that come out to a 25% "apparent" match rate when the "true" match rate (according to local realism) is over 33%?
The "true match rate" you are speaking about is only expected when no interactions exist (like in the case of distant Newtonian billiard balls). If interactions exist, the "true match rate" has to be determined based on the initial state of the whole experiment, as explained in my post #93. So, you need to calculate the rate based on the CA rules, not based on Newtonian rigid-body mechanics. If 't Hooft's math is correct, the "true match rate" of his CA is the same as QM's one.

DrChinese said:
2. Clearly defined? Exactly how are Alice and Bob's apparently independent choices of settings tied to the measurement outcomes?
These are Alice's possible states: +-+-+-+-+- and -+-+-+-+-+ (you can think of + and - as representing the charge distribution of Alice, or CA states). Same charges do not like being close to each other, which is why we only have those two states.

These are the possible states of the hidden variable: +-, -+

In the no interaction case these experimental states are possible:

1. Alice: +-+-+-+-+- HV: +-
2. Alice: +-+-+-+-+- HV: -+
3. Alice: -+-+-+-+-+ HV: +-
4. Alice: -+-+-+-+-+ HV: -+

If there is interaction, states 2 and 3 are impossible (same charges are facing each other), so the statistics are different.
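Written out explicitly (a direct transcription of the toy model above; reading the constraint as "Alice's facing symbol must differ from the hidden variable's facing symbol" is my interpretation of the geometry):

```python
# Enumerate the joint states with and without the interaction constraint and
# compare which combinations survive.
alice_states = ["+-+-+-+-+-", "-+-+-+-+-+"]
hv_states = ["+-", "-+"]

def allowed(alice, hv, interacting):
    if not interacting:
        return True
    # Same charges facing each other are excluded: Alice's last symbol must
    # differ from the hidden variable's first symbol.
    return alice[-1] != hv[0]

for interacting in (False, True):
    print("interacting =", interacting)
    for a in alice_states:
        for h in hv_states:
            if allowed(a, h, interacting):
                print("  Alice:", a, " HV:", h)
# Without interaction: all 4 combinations. With interaction: only states 1
# and 4 survive, so the joint statistics change.
```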
 
  • #99
PeterDonis said:
If you can't give a testable definition of "cause", then "cause" is not well defined for the purpose of making general claims based on it.
I defined "cause". It means "change" in this context.
PeterDonis said:
You can, of course, define what "cause" means in a particular model you happen to prefer even if that definition doesn't make "cause" testable in your model. But that doesn't require me to accept your definition or your model. And you certainly can't use such a model to make general assertions.
I can certainly define what a state change means in QM. A Z-spin DOWN state will give a DOWN result on Z with 100% certainty. Any other state will not give a DOWN result on Z with 100% certainty. If at T1 you have a DOWN state and at T2 you have a different state, that qualifies as a change.
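
To spell this out in standard notation (a restatement added here for clarity, using nothing beyond the Born rule): writing a general spin state as ##|\psi\rangle = \alpha|{\uparrow}_z\rangle + \beta|{\downarrow}_z\rangle## with ##|\alpha|^2 + |\beta|^2 = 1##, we get ##P(\text{DOWN on } Z) = |\beta|^2##, which equals 1 exactly when ##\alpha = 0##, i.e. when the state is ##|{\downarrow}_z\rangle## up to a global phase. On this definition, the state "changed" between T1 and T2 precisely when the coefficients ##(\alpha, \beta)## at T2 differ from those at T1 by more than a global phase.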

We know that, after the A measurement (UP), B is in a DOWN state. Was B in the same DOWN state before the measurement of A?
 
  • #100
AndreiB said:
This is the point of superdeterminism.
The point of "Superdeterminism" is simple: The initial conditions of the Universe were arranged that way that all measurements performed were and are consistent with the predictions of quantum mechanics.
 
  • Like
Likes EPR
  • #101
AndreiB said:
It's also nonsense to claim that if the probabilities come out differently we should abandon science. As you can see, superdeterminism is perfectly testable: you can get predictions from it and compare them to experiment.
The placebo effect is perfectly testable too. Nevertheless, it makes your experimental results pretty useless if you don't implement measures to get it under control and reduce it. We don't have to give up science, but we have to give up the hope of learning much from data affected by it ("unjustified independence assumptions") in an uncontrolled way.
 
  • #102
gentzen said:
The placebo effect is perfectly testable too. Nevertheless, it makes your experimental results pretty useless if you don't implement measures to get it under control and reduce it. We don't have to give up science, but we have to give up the hope of learning much from data affected by it ("unjustified independence assumptions") in an uncontrolled way.
In medicine we have the problem that a lot of phenomena are not understood. The placebo effect simply means that the psychological state of the patient matters. Even if you cannot eliminate this aspect completely, you could investigate the reason behind the effect and take that reason (say, the presence of some chemicals in the brain) into account. Of course, this makes research harder, but it is also of greater quality, since you gain a deeper understanding of the drug's action. In any case, it's not useless.

In other branches of science we understand pretty well what interacts with what, so we can design the experiment accordingly. Astronomers don't just assume that stars move independently of each other because the calculation would be easier. They first look at the interactions between them and then try to model their behavior as it is, not as they would like it to be. For some reason the EM interaction is treated differently (we know it exists but we ignore it), and this leads to wrong expectations regarding Bell tests.
 
  • #103
AndreiB said:
We start with a system of N charges corresponding to a physically possible microscopic state of the whole experiment. We input this state into a simulation on some supercomputer. The simulation will output 8 symbols:

Ax, Ay, Az - the spin components (hidden variables) of particle A
Bx, By, Bz - the spin components (hidden variables) of particle B
D1 (orientation of detector 1 at the time of detection)
D2 (orientation of detector 2 at the time of detection)

We repeat the simulation for a large number of initial states so that we get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved. In the end we just count how many times a certain value of the hidden variables corresponds to each detector setting. Our function will simply give you the probability of getting a certain hidden variable for a certain type of measurement.

As you can see, this function is fixed. It's just the statistical prediction of the theory (say classical EM) for this experiment. The function itself is trivially simple, but it rests on a huge amount of computation used to solve those N-body EM problems. You CAN choose whatever initial state you want (random number generators and such), but you cannot mess with the function.

OK, so now we can understand what Bell's independence assumption amounts to. It posits that all probabilities must be the same.
Not clear what this means. The independence assumption is ##P(Ai=+|D2=I) = P(Ai=+)##, similarly for B and D1.
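
For concreteness, here is a minimal sketch of how one could test that condition on the output of the proposed simulation. The `simulate` function below is a made-up placeholder (a real version would be the N-body EM solver), so the numbers only illustrate the counting procedure, not any physics:

```python
import random

# Placeholder "simulation": maps a random initial state (here just a
# seed) to the 8 outputs described above.  Purely illustrative.
def simulate(seed):
    rng = random.Random(seed)
    hidden = [rng.choice("+-") for _ in range(6)]  # Ax,Ay,Az,Bx,By,Bz
    d1, d2 = rng.randrange(3), rng.randrange(3)    # detector settings
    return hidden, d1, d2

runs = [simulate(s) for s in range(100_000)]

# Independence assumption: P(Az=+ | D2=d) == P(Az=+) for every d.
p_az = sum(h[2] == "+" for h, _, _ in runs) / len(runs)
print(f"P(Az=+)          = {p_az:.3f}")
for d in range(3):
    sub = [h for h, _, d2 in runs if d2 == d]
    p_cond = sum(h[2] == "+" for h in sub) / len(sub)
    print(f"P(Az=+ | D2={d}) = {p_cond:.3f}")
```

Superdeterminism amounts to the claim that the real dynamics would make these conditional probabilities differ from the marginal one; for the independent placeholder above they of course agree up to sampling noise.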
AndreiB said:
My question to you is simple. Why?
For the preparation of that experiment, I use the claim you made later:
AndreiB said:
So, we can justify the independence assumption theoretically.
I store the seeds for some pseudorandom number generators in devices near A and B, sufficiently isolated so that you can justify (with whatever means, not my problem) the independence assumption for these seeds. Then the boxes are opened a short moment before the measurements, and the values Ri given by these random number generators are used in such a way that they effectively modify D1 and D2 by ##Di \to (Di + Ri) \mod 3##.

By construction and by your independence claim, I can be quite sure that the Ri are independent of the Ai and Bi; so even if the Di before this final preparation step have a nontrivial correlation, one that would otherwise suffice to violate the Bell inequalities, nothing of it survives the addition of the Ri.

I know, it is a little bit unfair to combine two different parts of your argumentation here. But you have no choice: either you acknowledge that there are ways to make sure that there is independence, in which case I will use those ways to construct a Bell test in which the decisions of the experimenters are guaranteed to be independent of the initial state of the pair. Such a test would be safe against your version of superdeterminism. Or you cannot do it, and then my point is proven that with superdeterminism statistical experiments are dead.
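
A minimal sketch of this construction (the perfectly correlated initial settings below are a deliberately extreme stand-in for whatever correlation superdeterminism is supposed to produce; only the ##Di \to (Di + Ri) \mod 3## step is taken from the argument above):

```python
import random

rng = random.Random(0)
n = 100_000
same_before = same_after = 0

for _ in range(n):
    # Stand-in for "superdeterministically" correlated settings: D1 and
    # D2 are forced equal, an extreme made-up dependence.
    d1 = d2 = rng.randrange(3)
    same_before += (d1 == d2)

    # Seeds stored near A and B give independent values R1, R2, which
    # modify the settings as Di -> (Di + Ri) mod 3.
    r1, r2 = rng.randrange(3), rng.randrange(3)
    same_after += ((d1 + r1) % 3 == (d2 + r2) % 3)

print("P(D1=D2) before re-randomization:", same_before / n)  # 1.0
print("P(D1=D2) after  re-randomization:", same_after / n)   # ~1/3
```

Since R1 and R2 are uniform and independent of everything else, the final settings come out uniform and independent however strongly the original Di were correlated; that is the sense in which nothing of the correlation survives.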
AndreiB said:
an N-body system where N is 10^26 or so is not simple. You need to solve the equations to get the predictions. Since the initial states are picked at random, there is no conspiracy involved.
And that's why there will be no correlation. If you doubt this, make such computations yourself with the largest N you can handle. You will not find any correlations, except in cases where they can be explained in a sufficiently simple way.

AndreiB said:
I want to measure the temperature of something. I put a thermometer there. Nobody cares if my decision to make that measurement at that time and in that specific way was free or not. The only important thing is that the report contains all those decisions so that others can reproduce the experiment and rely on its result.
Just to clarify what we are arguing about here: it looks like you want to argue that superdeterminism can somehow be restricted for macroscopic bodies if they are in sufficiently stable states. Let's assume that is the case. Then I will build a device that creates pseudorandom numbers out of such macroscopic pieces.
 
  • #104
Lord Jestocost said:
The point of "Superdeterminism" is simple: The initial conditions of the Universe were arranged that way that all measurements performed were and are consistent with the predictions of quantum mechanics.
...even though the actual underlying physical laws are completely different from quantum mechanics. In other words, the initial conditions were arranged so that we humans would be misled into inferring a completely wrong set of physical laws, which nevertheless make all the correct predictions about experimental results.
 
  • Like
Likes Lord Jestocost and DrChinese
  • #105
AndreiB said:
1. No, but he derives QM, so implicitly his model makes the same predictions as QM.

2. This is the point of superdeterminism. You CANNOT pick the settings. They are determined by the initial state of the system. What you can pick is the initial state of the whole experiment; after that you cannot touch it.

3. Yes, because you are part of the universe (the CA) and you have to obey the rules of the CA. The decisions you make are already "there" in the past state of the CA.

4. The "true match rate" you are speaking about is only expected when no interactions exist (like in the case of distant Newtonian billiard balls). If interactions exist, the "true match rate" has to be determined based on the initial state of the whole experiment, as explained in my post #93. So, you need to calculate the rate based on the CA rules, not based on Newtonian rigid-body mechanics. If 't Hooft's math is correct, the "true match rate" of his CA is the same as QM's one.
...

Everything you mention is a gigantic hand wave. Basically: assume that my conclusion is correct, and that proves my conclusion is correct.

1. In the 't Hooft reference, he does not derive QM in the 6 pages. And since there is no model presented, and no attempt to show why Bell does not apply, he certainly doesn't make any predictions.

2. But I DO pick the settings! The question I want answered is HOW my choice is controlled. If something forces me to make the choice I do, what is it and, most importantly, WHERE IS IT? Is it in an atom in my brain? Or a cell? Or a group of cells? And what if I choose to make a choice by way of my PC's random number generator? How did the computer know to give me a number that would lead to my choice?

3. This is basic stuff here, and simply saying "the universe made me do it" does not hold water. We are supposed to be scientists, and this is not science at all.

4. You wouldn't need to jump through "superdeterministic ad hoc" hoops if Bell didn't exclude all local realistic theories. Specifically, if the observed rate and the "true" rate both fell inside the classical range (>33% in my example), the Bell inequalities wouldn't be violated. In case you missed it, "superdeterminism" ONLY applies to Bell tests and the like. For all other physical laws (including the rest of QM), the experimenter apparently has completely free will: for example, tests of gravitational attraction, the speed of light, atomic and nuclear structures, etc.

BTW, the superdeterminism you are describing is contextual... and therefore violates local realism. Maintaining local realism is the point of superdeterminism in the first place, so that's a big fail. In case it is not clear why this is so: the SD hypothesis is that the experimenter is forced to make a specific choice. Why should that be necessary? If the true rate were always 25% (violating the Bell constraint), then there would be no need to force the experimenter to make a compliant choice; any setting choice would support SD. Obviously, the true rate must be within the >33% region in my example to avoid contextuality issues.
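
For readers wondering where the two figures come from, here is a minimal brute-force check of the 0/120/240 example (standard Bell-style counting; it assumes nothing beyond the numbers already quoted in this thread):

```python
from itertools import product, combinations
from math import cos, radians

# Local realism for Type I PDC pairs: both photons carry the same
# predetermined answers (+1 pass / -1 absorb) for the three angles
# 0, 120 and 240 degrees.
assignments = product((+1, -1), repeat=3)

def match_rate(outcome):
    # Average match rate over the three distinct-angle setting pairs.
    pairs = list(combinations(range(3), 2))
    return sum(outcome[i] == outcome[j] for i, j in pairs) / len(pairs)

print("minimum local realistic match rate:",
      min(match_rate(o) for o in assignments))        # 1/3, i.e. 33.3%

# Quantum prediction for any two distinct settings (relative angle 120):
print("quantum match rate:", cos(radians(120)) ** 2)  # 0.25, i.e. 25%
```

Of three predetermined ±1 answers, at least two must be equal, so at least one of the three distinct-angle pairs matches and no assignment can push the average below 1/3; the quantum value of 1/4 sits outside that range, which is exactly what the challenge exploits.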
 
  • Like
Likes mattt, weirdoguy and PeterDonis
