Is this popular description of entanglement correct?

Thread starter: entropy1
Tags: entanglement
  • #51
AndreiB said:
I don't see how one can talk about Bell's theorem without mentioning hidden variables.
Bell's theorem of course explicitly includes an assumption about hidden variables, so yes, if you want to talk specifically about Bell's theorem, you're going to be talking about hidden variables. (I'll give an explicit example of such talk below.)

However, Bell's theorem, in itself, says nothing whatever about either quantum mechanics, or the results of actual experiments. Of course Bell knew that QM predicts violations of the Bell inequalities (that's why he went to the trouble of publishing his theorem), and we now know that experiments confirm those predictions of QM. But you can talk about QM and experimental results without talking about hidden variables at all. Hidden variable models are not the only possible models. You can even talk about the fact that QM/experimental results violate the Bell inequalities without talking about hidden variable models.

AndreiB said:
I claimed that one cannot prove, based on Bell's theorem that EM cannot violate them
If you are really unable to see the obvious proof, consider: EM is a local hidden variable model in the sense that Bell's theorem uses that term. (So is classical General Relativity.) Therefore, by Bell's theorem, its predictions must satisfy the Bell inequalities.
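To make this concrete, here is a toy Monte Carlo sketch (my own illustration, not anything proposed in this thread): any model in which each outcome is a fixed ±1 function of the local setting and a shared hidden variable ##\lambda##, with ##\lambda## distributed independently of the settings, stays within the CHSH form of the Bell bound, ##|S| \le 2##, while the quantum singlet prediction reaches ##2\sqrt{2}##. The response function and angles below are illustrative choices only.

```python
import math
import random

def lhv_outcome(setting, lam):
    # Deterministic local response: +/-1 depends only on the local
    # setting and the shared hidden variable lam (a polarization angle).
    return 1 if math.cos(2 * (setting - lam)) >= 0 else -1

def E_lhv(a, b, n=200_000, rng=random.Random(0)):
    # Correlation E(a, b) with lam drawn independently of the settings --
    # this is the statistical-independence assumption of Bell's theorem.
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, math.pi)
        total += lhv_outcome(a, lam) * lhv_outcome(b, lam)
    return total / n

def E_qm(a, b):
    # Quantum singlet-state prediction for the same settings.
    return -math.cos(2 * (a - b))

# Standard CHSH angle choices (radians).
a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8

S_lhv = E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2)
S_qm = E_qm(a, b) - E_qm(a, b2) + E_qm(a2, b) + E_qm(a2, b2)
# |S_lhv| stays within 2 (up to sampling noise); |S_qm| = 2*sqrt(2).
```

This particular local model actually saturates the classical bound at these angles, which makes the gap to the quantum value as clean as it gets.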

AndreiB said:
There is no other theory of relativity.
If you define "theory of relativity" to only include classical relativity, then you have excluded quantum field theory. In which case your definition of "theory of relativity" is irrelevant to this discussion.

AndreiB said:
let's assume for the sake of the argument that A caused B.
No, let's define what "A caused B" means in terms of testable predictions. Otherwise it's just meaningless noise as far as physics is concerned. Can you do that?
 
  • #52
PeterDonis said:
If you are really unable to see the obvious proof, consider: EM is a local hidden variable model in the sense that Bell's theorem uses that term. (So is classical General Relativity.) Therefore, by Bell's theorem, its predictions must satisfy the Bell inequalities.
Andrei is right that classical EM can in principle violate Bell's inequality. One just has to fine-tune the initial conditions of the full system accordingly. Such a model is very difficult to write down, but as Bell pointed out in his paper "La nouvelle cuisine," local realism can in principle be saved through the freedom-of-choice loophole, i.e. by violating a certain statistical independence assumption. The debate is not really about whether this is possible, but whether the required amount of fine-tuning is realistic.
 
  • #53
Nullstein said:
One just has to fine-tune the initial conditions of the full system accordingly.
Ah, yes, I had forgotten about superdeterminism.
 
  • #54
Nullstein said:
Andrei is right about the fact that classical EM can in principle violate Bell's inequality. One just has to fine-tune the initial conditions of the full system accordingly.
PeterDonis said:
Ah, yes, I had forgotten about superdeterminism.
Andrei doesn't argue well enough for it to "help" if he defended superdeterminism. Sabine Hossenfelder argues much more subtly, and got many more things right. But even she has trouble making people see her point(s). Gerard 't Hooft also doesn't argue very convincingly. Tim Palmer seems to argue fine, but he is less "active/vocal/aggressive".
 
  • #55
AndreiB said:
OK, let's assume for the sake of the argument that A caused B. In this case you need to specify an absolute reference frame, to show that A happened first. Since QFT does not specify this frame, its predictions would be inconsistent (different observers would disagree on how the same experiment happened).

1. Your conclusion is incorrect. QFT does not specify a cause-effect relationship, and so nothing is inconsistent. The error is asserting A causes B in a Bell test. Everyone knows that the relative ordering of Alice/Bob measurements has no observable effect on the outcome.

2. Earlier, you mentioned long range EM effects (which presumably are bound by c). Bell tests have been performed where the measurement settings are changed midflight so that there is no possibility of EM effects between the angle settings of the detection systems. And it is in fact those settings which determine the statistical outcomes in Bell tests.

3. I hate it when Superdeterminism is brought up. There is NO THEORY/INTERPRETATION OF SUPERDETERMINISM that explains Bell test results.

Superdeterminism is a general idea, about as specific as using the term "God". You can just as easily say God picks the individual outcomes of Bell tests, and that therefore nature can be locally deterministic. You aren't explaining anything. An actual theory of Superdeterminism would necessarily have so much baggage, it would be easier to believe in God by way of Occam's Razor. Neither of which would make much sense for a quantum theory.
 
  • #56
DrChinese said:
3. I hate it when Superdeterminism is brought up. There is NO THEORY/INTERPRETATION OF SUPERDETERMINISM that explains Bell test results.
Careful, there is a difference between reproducing the predictions of QM and violating Bell's inequality. If you create a model and are not careful to properly respect the independence assumption, then you can easily produce a violation. Maybe you believe that you have found some special loophole in Bell's theorem that nobody has discovered before. And Sabine Hossenfelder correctly points out what such "easily overlooked" mistakes could look like. For example, you could have defined a hidden parameter relative to some of the detector settings ("because it seemed to be more convenient for your calculations").
 
  • #57
gentzen said:
there is a difference between reproducing the predictions of QM and violating Bell's inequality
@DrChinese was talking about actual experimental results. The experiments in question are deliberately set up to have the measurement settings be independent of the process that produces the particles being measured. Superdeterminism in that context amounts to the claim that it is impossible in principle to set up experiments to actually have those things be independent, no matter how hard you try--even including having the events at which the measurement settings are determined be spacelike separated from the events that produce the particles being measured (as well as the measurement events themselves being spacelike separated). @AndreiB has already said he's okay with that position, but that doesn't make it any less extreme.
 
  • #58
PeterDonis said:
@DrChinese was talking about actual experimental results.
DrChinese said:
Superdeterminism is a general idea, about as specific as using the term "God". You can just as easily say God picks the individual outcomes of Bell tests, and that therefore nature can be locally deterministic. You aren't explaining anything.
To me, that section doesn't sound like talking about actual experimental results.

PeterDonis said:
Superdeterminism in that context amounts to the claim that it is impossible in principle
Well, but superdeterminism is also the name given to a specific class of "loopholes" (or "assumptions") in Bell's theorem. Accidentally producing a model that violates those assumptions has nothing to do with "the claim that it is impossible in principle ..."

That this claim is needed for a "successful" superdeterministic model is an additional assumption, on top of the less controversial role that (the absence of) superdeterminism plays as the name for a specific assumption in Bell's theorem.
 
  • #59
gentzen said:
To me, that section doesn't sound like talking about actual experimental results.
Um, what? It says right there in what you quoted: "the individual outcomes of Bell tests".

gentzen said:
superdeterminism is also the name given to a specific class of "loopholes" (or "assumptions") in Bell's theorem.
They aren't loopholes in the theorem. The theorem is a mathematical theorem. They are proposed models that violate one of the assumptions of the theorem, while still being deterministic in some sense in which the proposer of the model thinks standard QM isn't.

gentzen said:
Accidentally producing a model that violates those assumptions has nothing to do with "the claim that it is impossible in principle ..."
I don't see how this is relevant at all to what I said. I was talking about superdeterministic models as they are proposed by those who argue for superdeterminism, not some accidentally produced model that happens to violate one of the assumptions of Bell's theorem.
 
  • #60
DrChinese said:
3. I hate it when Superdeterminism is brought up. There is NO THEORY/INTERPRETATION OF SUPERDETERMINISM that explains Bell test results.
Well, there is currently no generally satisfactory interpretation of quantum mechanics in the first place. All of them suffer from certain peculiarities one way or the other, otherwise we wouldn't have these endless discussions. There has been renewed interest in superdeterminism recently because these discussions were going nowhere. And it's easy in principle to come up with a list of a hundred quadruples ##A,\alpha, B,\beta## that obeys the locality condition but violates statistical independence.
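As an illustration of how cheap such a list is to produce (my own sketch, not a model from the literature): if the hidden variable is allowed to depend on the settings — here, bluntly, a preassigned outcome pair drawn from the singlet joint distribution ##P(A,B\mid\alpha,\beta)## — then each quadruple ##(A,\alpha,B,\beta)## is "locally" predetermined, yet the empirical CHSH value of the list violates the inequality. Nothing here explains anything; it only shows that violating statistical independence trivially evades the bound.

```python
import math
import random

def make_table(n, seed=42):
    # Each row (A, alpha, B, beta): settings alpha, beta plus outcomes
    # A, B that were fixed in advance -- but sampled from the singlet
    # joint distribution, so the "hidden variable" (the preassigned
    # pair) is correlated with the settings.
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        alpha = rng.choice([0.0, math.pi / 4])
        beta = rng.choice([math.pi / 8, 3 * math.pi / 8])
        p_same = (1 - math.cos(2 * (alpha - beta))) / 2  # P(A == B), singlet
        A = rng.choice([1, -1])
        B = A if rng.random() < p_same else -A
        rows.append((A, alpha, B, beta))
    return rows

def chsh(rows):
    # Empirical E(alpha, beta) per settings pair, combined into S.
    sums, counts = {}, {}
    for A, alpha, B, beta in rows:
        k = (round(alpha, 6), round(beta, 6))
        sums[k] = sums.get(k, 0) + A * B
        counts[k] = counts.get(k, 0) + 1
    E = {k: sums[k] / counts[k] for k in sums}
    a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
    key = lambda x, y: (round(x, 6), round(y, 6))
    return E[key(a, b)] - E[key(a, b2)] + E[key(a2, b)] + E[key(a2, b2)]

hundred = make_table(100)  # the "list of a hundred quadruples"
```

With only 100 rows the empirical S is noisy; increasing n makes its magnitude converge to ##2\sqrt{2} \approx 2.83##, well past the bound of 2.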
DrChinese said:
Superdeterminism is a general idea, about as specific as using the term "God". You can just as easily say God picks the individual outcomes of Bell tests, and that therefore nature can be locally deterministic. You aren't explaining anything. An actual theory of Superdeterminism would necessarily have so much baggage, it would be easier to believe in God by way of Occam's Razor. Neither of which would make much sense for a quantum theory.
I agree that superdeterminism is a peculiar solution to the Bell mystery, but non-locality is no less peculiar. In fact, the knob on Alice's polarizer must, when turned, somehow magically have the capability to modify Bob's particle, despite the fact that it was never built specifically to have this capability. Why doesn't it manipulate any other particle in the universe as well? Why doesn't it turn on the TV in the living room? What is so specific about Bob's particle that a completely unrelated knob on Alice's device can manipulate its properties? Exceeding the speed of light is not even the most peculiar feature in this scenario.

Thus, for me, the lesson of Bell's theorem is not that the world is non-local, but that we need to give up the idea of classical determinism altogether and replace it with something else. Superdeterminism is not the solution either. Maybe we also need to rethink what a causal mechanism is.
 
  • #61
Nullstein said:
the knob on Alice's polarizer must, when turned, somehow magically have the capability to modify Bob's particle
This is only true according to certain interpretations of QM.
 
  • #62
PeterDonis said:
This is only true according to certain interpretations of QM.
Agreed, but I was specifically talking about non-local interpretations of QM in that post.
 
  • #63
Nullstein said:
I was specifically talking about non-local interpretations of QM in that post.
Which interpretations of QM do you consider "non-local"?
 
  • #64
PeterDonis said:
Which interpretations of QM do you consider "non-local"?
When I wrote that paragraph, I was specifically thinking about Bohmian mechanics, but I guess it applies to all non-local hidden variable theories.
 
  • #65
PeterDonis said:
I don't see how this is relevant at all to what I said. I was talking about superdeterminism
You replied to me, and I was talking about how easy it is to accidentally produce a superdeterministic model. And quoting "the individual outcomes of Bell tests" out of context doesn't make you more right. You simply did not try to understand what I was trying to say.

Bell's theorem is (also) a mathematical theorem, and loading one of its assumptions with connotations can sometimes trigger a mathematician (like me) to protest. What triggered me was the peppering of the connotations with statements like "You can just as easily say God picks the individual outcomes ... You aren't explaining anything."

In the end, I believe nature is non-local in certain ways. So I certainly don't try to fight the conclusions of Bell's theorem. But I do think that Sabine Hossenfelder and Tim Palmer are on the right track to make progress in our understanding of quantum mechanics. Not in the sense of removing counter-intuitive elements of QM, but in the sense of making an additional counter-intuitive element of QM more concrete and "analyzable", just like Bell did with non-locality.
 
  • #66
gentzen said:
I was talking about how easy it is to accidentally produce a superdeterministic model.
And I was pointing out that that is irrelevant to this discussion, since, first, @DrChinese was not proposing any model, he was talking about the actual experimental results, and second, those who claim that superdeterminism is a valid explanation of those experimental results are not basing such a claim on models produced "accidentally". They aren't basing the claim on any models at all. Nobody has proposed an actual superdeterministic model that explains the actual experimental results we have on measurements of entangled particles. Which is what @DrChinese said.

What people like Hossenfelder should do, if they want to actually argue for superdeterminism as a valid explanation, is not to complain that other people could "accidentally" produce a superdeterministic model; it is to deliberately produce such a model themselves and show how it explains the experimental results. Then we would have an actual model to evaluate.

gentzen said:
And quoting "the individual outcomes of Bell tests" out of context doesn't make you more right.
I didn't quote it out of context. I quoted it directly from what you quoted from @DrChinese; you accompanied that quote with the claim that "that section" (what you quoted) "didn't sound like talking about actual experimental results". I don't know what would sound like that to you if the explicit phrase "the individual outcomes of Bell tests" doesn't.

gentzen said:
You simply did not try to understand what I was trying to say.
As the saying goes, the fact that I disagree with you does not mean I don't understand your position.
 
  • #67
gentzen said:
loading one of its assumptions with connotations can sometimes trigger a mathematician (like me) to protest.
The fact that you were "triggered" does not automatically make what you are claiming true.
 
  • #68
PeterDonis said:
As the saying goes, the fact that I disagree with you does not mean I don't understand your position.
But here again you give me the impression that you don't try to understand me. What I wrote was not related to a position. I replied "Careful, there is a difference ..." to a section that read "I hate it when ... NO THEORY/INTERPRETATION OF SUPERDETERMINISM ...".

PeterDonis said:
The fact that you were "triggered" does not automatically make what you are claiming true.
What do I claim from your point of view? I explain why I replied to DrChinese. If you reply to me indicating that you don't get why I replied, or how my reply is related to what DrChinese wrote, it is only natural that I will try to clarify those points.
 
  • #69
By the way, it's not true that there exist no superdeterministic models for the Bell correlations. A couple of them are cited in the paper https://arxiv.org/abs/1511.00729 (section 4.2) and the author even argues that they only require a minor violation of statistical independence.
 
  • #70
gentzen said:
What I wrote was not related to a position. I replied "Careful, there is a difference ..." to a section that read "I hate it when ... NO THEORY/INTERPRETATION OF SUPERDETERMINISM ...".
Yes, and your reply had nothing to do with what @DrChinese actually said. Your reply talked about models and how easy it is to "accidentally" produce a model that violates the independence assumption. @DrChinese was talking about actual experimental results and whether any superdeterministic model could explain them.

For an example of what a substantive reply to what @DrChinese actually said would look like, see post #69 by @Nullstein just now.

gentzen said:
What do I claim from your point of view?
You appear to me to be claiming that superdeterminism can provide a valid explanation for the actual experimental results we have on measurements of entangled particles.

gentzen said:
I explain why I replied to DrChinese.
You explained that you were "triggered", yes. That explains why you replied, but as I pointed out, it doesn't serve as justification for what you said being true. Or having anything to do with what @DrChinese said, for that matter.

gentzen said:
If you reply to me indicating that you don't get why I replied, or how my reply is related to what DrChinese wrote, it is only natural that I will try to clarify those points.
I have already explained, several times now, why your reply was not related to what @DrChinese wrote. Nothing in your responses has addressed anything I said about that.
 
  • #71
PeterDonis said:
You appear to me to be claiming that superdeterminism can provide a valid explanation for the actual experimental results we have on measurements of entangled particles.
Good to know. I certainly did not intend to claim that. I did try to defend Hossenfelder and Palmer to a certain limited extent, namely that they do understand many important points related to superdeterminism. Not because I want to convince anybody, but simply because it is my honest opinion, and I don't want to lie. But because I have the strong impression that the word "superdeterminism" is heavily loaded with connotations, I also don't want to be drawn into discussions about it. (But I found it valid to reply with "Careful, ..." to a statement that went "I hate ... SOME STATEMENT IN ALL CAPS ...", because it exemplified exactly this loaded state of affairs.)

PeterDonis said:
Nothing in your responses has addressed anything I said about that.
Good to know. Then I will stop here. Thanks for trying to clarify to me what DrChinese really intended to say. Sorry that I have misunderstood his intention because he somehow "triggered" me.
 
  • #72
Nullstein said:
Well, there is currently no generally satisfactory interpretation of quantum mechanics in the first place.

I do not think it is sound scientific practice to state personal opinions as scientific facts. Much better to say there is no interpretation of QM everybody agrees on. Some think many interpretations are simply a continuation of discussions about what probability is:
https://math.ucr.edu/home/baez/bayes.html

Yet strangely, interpretations of probability do not seem to generate much discussion. Most that use probability, such as actuaries, do not worry about it at all. Indeed, when I studied it, I was blissfully unaware of the issues. But then again, I did applied math. Pure math guys may go into it more.

Thanks
Bill
 
  • #73
AndreiB said:
let S0 be the microscopic state (positions/momenta of charged particles + electric/magnetic fields) of the source at a certain (initial) time before the experiment. Let D10 and D20 be the corresponding microscopic states of the detectors. Since all charged particles interact, the hidden variable, lambda (the polarisations of the emitted EM waves at some later time), would be given by a very complicated function like:

lambda = f(S0,D10,D20).

So, lambda cannot be independent of either D10 or D20, it's a function of them.
Being a function is not in conflict with being independent. A good pseudorandom number generator gives you a pseudorandom number as a function u of the seed s and the number of applications of the generator i. In principle it may simply be defined by a recurrence u(i+1,s) = F(u(i,s)) which gives widely different results even if only a single bit is changed. Nonetheless, for all the usual tests of randomness, these sequences will look like random sequences. In particular ##P(u(i+1,s)\mid u(i,s)\in[u_0,u_1]) = P(u(i+1,s))## if the interval ##[u_0,u_1]## specifies only a minor part of the bits contained in the value ##u(i,s)## itself. So even explicit functions can lead to independence according to all the applicable statistical criteria.
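This point can be sketched concretely (my illustration; the "minstd" LCG parameters are a standard textbook choice, not anything from this thread): successive outputs of a purely deterministic recurrence show essentially no serial correlation, and conditioning on a coarse window of u(i) leaves the distribution of u(i+1) essentially unchanged — exact functional dependence coexisting with statistical independence by the usual tests.

```python
import math

M = 2**31 - 1  # modulus of the "minstd" linear congruential generator

def F(u):
    # One fully deterministic step: u(i+1) = F(u(i)).
    return (48271 * u) % M

def sequence(seed, n):
    u, out = seed, []
    for _ in range(n):
        u = F(u)
        out.append(u / M)  # normalize to [0, 1)
    return out

xs = sequence(12345, 100_000)
pairs = list(zip(xs, xs[1:]))
n = len(pairs)

# Serial correlation between consecutive outputs: close to zero,
# even though each output is an exact function of the previous one.
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
vx = sum((x - mx) ** 2 for x, _ in pairs) / n
vy = sum((y - my) ** 2 for _, y in pairs) / n
r = cov / math.sqrt(vx * vy)

# Conditioning on a coarse window of u(i) barely moves u(i+1):
# P(u(i+1) < 1/2 | u(i) in [0.2, 0.3)) stays near 1/2.
window = [y for x, y in pairs if 0.2 <= x < 0.3]
p_cond = sum(1 for y in window if y < 0.5) / len(window)
```

Knowing a coarse range of u(i) tells you essentially nothing about u(i+1) unless you know the exact value, which mirrors Sunil's point about "only a minor part of the bits".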
AndreiB said:
I have no idea what correlations, if any, can be generated in this way.
And I tell you that destroying remaining correlations is easy; adding a pseudorandom number will do the job. Creating correlations, on the other hand, is impossible. Observable correlations require causal explanations.

AndreiB said:
There is no need to posit any conspiracy. Interacting objects are not independent, this is the only point I am trying to make. Since one premise of Bell's theorem is not fulfilled, the conclusion does not follow. The conclusion might still be true, but not necessarily so.
It is a known property of conspiracy theories that one cannot reject them by pure logic. But you have to accept the basic principles of causality too.

AndreiB said:
As explained, the above argument applies only to interacting systems. Even in a Bell test, the macroscopic settings of the detectors are independent parameters (since the interaction between their constituent particles cannot determine a macroscopic rotation of the device).
No. Microscopic causes can influence macroscopic detector settings, and often do. Without this, no quantum measurements would be possible at all. The experimenters can decide to use microscopic particles by design, say, by using a Geiger counter to decide what to measure. But even if they throw macroscopic dice, the results can be influenced by microscopic turbulence, which can have even atomic causes.

AndreiB said:
So, you can assume independence in all experiments where the microscopic arrangement is not relevant, which includes almost everything except Bell tests and a few other quantum experiments.
I disagree. There is no experiment with statistical outcomes where the outcome does not depend on microscopic causes too.

AndreiB said:
I also think that the importance of the independence assumption is greatly exaggerated. Most experiments do not depend on it.
I disagree. There is no experiment with statistical outcomes which could not be explained away if one cannot make an independence assumption.
AndreiB said:
It's not about inventing anything. You analyze the situation and determine, based on what we know, what is independent and what is not. It's a scientific, objective criterion.
Using your way of reasoning in the microscopic world the conclusion will be simple - nothing is independent. So, no analysis is really necessary, the result will be all the same.
 
  • #74
DrChinese said:
The error is asserting A causes B in a Bell test. Everyone knows that the relative ordering of Alice/Bob measurements has no observable effect on the outcome.
It follows only that the hypothesis ##A\to B## is as viable as the hypothesis ##B\to A##, given the outcome. I don't understand why this would make one of the two possible causal explanations an error.
DrChinese said:
An actual theory of Superdeterminism would necessarily have so much baggage, it would be easier to believe in God by way of Occam's Razor. Neither of which would make much sense for a quantum theory.
Good point.
 
  • #75
PeterDonis said:
But you can talk about QM and experimental results without talking about hidden variables at all. Hidden variable models are not the only possible models. You can even talk about the fact that QM/experimental results violate the Bell inequalities without talking about hidden variable models.
The problem is that we have the EPR argument, a form of it being presented in my post #7. This leaves you with two options: non-locality (in the sense that A causes B even if A and B are space-like separated) and hidden variables. Rejecting hidden variables necessarily implies non-locality. I am not saying it's wrong, but I still think that locality is the most reasonable option, hence hidden variables are the most reasonable option.

PeterDonis said:
If you are really unable to see the obvious proof, consider: EM is a local hidden variable model in the sense that Bell's theorem uses that term. (So is classical General Relativity.) Therefore, by Bell's theorem, its predictions must satisfy the Bell inequalities.
My whole point is that EM has not been shown to obey the statistical independence requirement. Without statistical independence, Bell's conclusion does not follow.

PeterDonis said:
If you define "theory of relativity" to only include classical relativity, then you have excluded quantum field theory. In which case your definition of "theory of relativity" is irrelevant to this discussion.
Relativity is about the space-time structure. There is no quantum theory of space-time. QFT is just an example of a physical theory using the SR background, in the same way non-relativistic QM uses the Newtonian background.

PeterDonis said:
No, let's define what "A caused B" means in terms of testable predictions. Otherwise it's just meaningless noise as far as physics is concerned. Can you do that?
It's a reductio ad absurdum argument. If you assume, for the sake of the argument, that A caused B, you run into some unpleasant consequences, like the requirement of defining an absolute reference frame. If you don't like those consequences you must deny the premise (A caused B), which, given the EPR argument presented in my post #7, necessarily implies the existence of hidden variables.

I do not think that A causes B, so I take the hidden variable route.
 
  • #76
Nullstein said:
Andrei is right about the fact that classical EM can in principle violate Bell's inequality. One just has to fine-tune the initial conditions of the full system accordingly.
Why do you need to fine-tune the initial conditions?
 
  • #77
DrChinese said:
1. Your conclusion is incorrect. QFT does not specify a cause-effect relationship, and so nothing is inconsistent. The error is asserting A causes B in a Bell test.
It was a reply to PeterDonis. He seemed to be fine with a causal relationship between space-like separated events, so I presented him a reductio ad absurdum argument. I do not actually argue that A causes B. However, QFT does not forbid it either.

DrChinese said:
Everyone knows that the relative ordering of Alice/Bob measurements has no observable effect on the outcome.
I never said it does.

DrChinese said:
2. Earlier, you mentioned long range EM effects (which presumably are bound by c). Bell tests have been performed where the measurement settings are changed midflight so that there is no possibility of EM effects between the angle settings of the detection systems. And it is in fact those settings which determine the statistical outcomes in Bell tests.
EM is a deterministic theory. In order to be able to change the settings "midflight" you need to start with an initial state that necessarily determines you to make that change, in the way and at the specific time you make it. That initial state also determines the hidden variable (because of the long-range interactions), so one cannot establish those variables to be independent.

DrChinese said:
3. I hate it when Superdeterminism is brought up. There is NO THEORY/INTERPRETATION OF SUPERDETERMINISM that explains Bell test results.
Well, 't Hooft disagrees with you:

"Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy"

https://arxiv.org/abs/2103.04335

DrChinese said:
Superdeterminism is a general idea, about as specific as using the term "God".
No, it's a very clearly defined idea. The hidden variable and the settings of the detectors are not statistically independent variables.

DrChinese said:
You can just as easily say God picks the individual outcomes of Bell tests, and that therefore nature can be locally deterministic.
Of course you can. But just because you can imagine stupid superdeterministic theories does not mean that all such theories are necessarily stupid.

DrChinese said:
You aren't explaining anything.
I am following the logically available options. In the end we will see if an explanation would emerge or not. What I can tell you with certainty is that without hidden variables there is no local explanation of EPR correlations.

DrChinese said:
An actual theory of Superdeterminism would necessarily have so much baggage, it would be easier to believe in God by way of Occam's Razor.
What evidence do you have for this assertion?
 
  • #78
Sunil said:
Being a function is not in conflict with being independent. A good pseudorandom number generator gives you a pseudorandom number as a function u of the seed s and the number of applications of the generator i. In principle it may simply be defined by a recurrence u(i+1,s) = F(u(i,s)) which gives widely different results even if only a single bit is changed. Nonetheless, for all the usual tests of randomness, these sequences will look like random sequences. In particular ##P(u(i+1,s)\mid u(i,s)\in[u_0,u_1]) = P(u(i+1,s))## if the interval ##[u_0,u_1]## specifies only a minor part of the bits contained in the value ##u(i,s)## itself. So even explicit functions can lead to independence according to all the applicable statistical criteria.
Indeed, SOME functions would still allow for independence. But some other functions would not. So, depending on the function you have, Bell's theorem applies or not, which means that you cannot a priori assume that a certain local theory with long-range interactions is ruled out. You need to compute the relevant function and see whether the hidden variables are independent of the measurement settings or not.

In the case of classical EM, for example, do you have any evidence about what that function looks like?

Sunil said:
And I tell you that to destroy remaining correlations is easy, adding a pseudorandom number will do the job. Instead, creating correlations is impossible. Observable correlations require causal explanations.
You cannot "add a pseudorandom number" to the function. The function is defined by the structure of the theory. In the case of EM, the states must satisfy Maxwell's equations.

Sunil said:
It is a known property of conspiracy theories that one cannot reject them by pure logic.
Since my argument does not depend on any special choice of initial state, I find the discussion about "conspiracy" a red herring. You are free to choose whatever initial state you want, as long as such a state is physically possible (obeys Maxwell's equations, or whatever equations the theory under investigation has). Then your hidden variable would be related to the measurement settings by some function that follows from the mathematical structure of the theory. Where is the conspiracy?

Sunil said:
No. Microscopic causes can influence macroscopic detector settings and often do it.
Sure, but not always. You have independence in those situations where the microscopic state does not influence the macroscopic state.

Sunil said:
I disagree. There is no experiment with statistical outcomes where the outcome does not depend on microscopic causes too.
The statistics of a coin flip do not depend on the charge distribution of the electrons and nuclei inside the coin.

The trajectory of a billiard ball depends on the initial position and momentum, not on the microscopic configuration of its internal charges.

Fluid mechanics does not depend on the exact arrangement of the molecules.

The efficacy of a treatment does not depend on the exact arrangement of the drug molecules.

Sunil said:
I disagree. There is no experiment with statistical outcomes which could not be explained away if one cannot make an independence assumption.
Not all experiments have statistical outcomes.

Sunil said:
Using your way of reasoning in the microscopic world the conclusion will be simple - nothing is independent.
Indeed, in the microscopic world nothing is independent. It's a well known implication of QM. It's called contextuality.

Sunil said:
So, no analysis is really necessary, the result will be all the same.
Yes, an analysis is necessary, since not all experiments depend on the microscopic states. The billiard balls on a table do not have independent microscopic states. But this is irrelevant if I study their collisions.
 
  • #79
AndreiB said:
The problem is that we have the EPR argument, a form of it being presented in my #7 post. This leaves you with two options: non-locality (in the sense that A causes B even if A and B are space-like) and hidden variables. Rejecting hidden variables necessarily implies non-locality.
That's the EPR argument. (Today you have to add some more rejections of complete nonsense like superdeterminism).
AndreiB said:
I am not saying it's wrong but I still think that locality is the most reasonable option, hence hidden variables are the most reasonable option.
You cannot save locality with hidden variables, that's Bell's theorem.
 
  • #80
Sunil said:
That's the EPR argument. (Today you have to add some more rejections of complete nonsense like superdeterminism).
It so happens that you cannot win an argument by simply labeling your opponent's point "nonsense". If this is all you have to say, I can relax and drink my coffee.

Sunil said:
You cannot save locality with hidden variables, that's Bell's theorem.
In principle you can. Just because you call the solution "nonsense" doesn't make it so.
 
  • #81
AndreiB said:
Indeed, SOME functions would still allow for independence. But some other functions would not.
Those which are so simple and regular that a causal explanation can be found.
AndreiB said:
So, depending on the function you have, Bell's theorem applies or not, which means that you cannot a-priori assume that a certain local theory with long-range interactions is ruled out.
There are, of course, experimental configurations where you cannot exclude a causal explanation for correlations between the experimenters' decisions and the prepared state. In medicine, studies which are not double-blind come to mind.

Science depends on the possibility of making independent choices for experiments. That there may also be bad experiments, where correlations appear because of design errors, is harmless.

AndreiB said:
in the case of classical EM for example, do you have any evidence about what that function looks like?
The classical Maxwell equations have a limiting speed, namely c. This is sufficient to prove the Bell inequalities for space-like separated measurements.
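As a numerical illustration of this claim (a sketch, not a proof, with a toy hidden variable and a toy response function chosen purely for demonstration): in any model where each outcome is a deterministic ±1 function of the local setting and a shared hidden variable λ, the CHSH combination of correlations stays within the bound |S| ≤ 2.

```python
import math
import random

def correlation(model, x, y, n=200_000):
    """Estimate E(x, y) when both stations apply the same local response model."""
    s = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)  # shared hidden variable
        s += model(x, lam) * model(y, lam)    # each outcome depends only on local data
    return s / n

# toy local response: +1 if the setting is within 90 degrees of lambda
model = lambda setting, lam: 1 if math.cos(setting - lam) >= 0 else -1

a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (correlation(model, a, b) - correlation(model, a, b2)
     + correlation(model, a2, b) + correlation(model, a2, b2))
print(S)  # close to 2: this local model saturates, but never exceeds, the CHSH bound
```

For comparison, QM predicts S = 2√2 ≈ 2.83 for the singlet state at these angles, which no choice of local `model` and λ-distribution can reproduce.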
AndreiB said:
You cannot "add a pseudorandom number" to the function. The function is defined by the structure of the theory. In the case of EM, the states must satisfy Maxwell's equations.
You can use some value to define the actual orientation of the device, so that turning it cannot causally influence the initial state. And to this value of the turning angle (which may depend on whatever you like, no problem) you can add that pseudorandom number.
AndreiB said:
Since my argument does not depend on any special choice of initial state, I find the discussion about "conspiracy" a red-herring.
Whatever you name it, it does not matter. You need a consistent pattern of correlations in all Bell experiments, while you are not even able to identify correlations in simple pseudorandom number generators where everything is known and solvable on every computer.

AndreiB said:
You are free to choose whatever initial state you want, as long as such a state is physically possible (obeys Maxwell's equations or whatever equations the theory under investigation has). then your hidden variable would be related to the measurement settings by some function that follows from the mathematical structure of the theory. Where is the conspiracy?
The conspiracy is that this "related" translates into a correlation. This happens only in exceptionally simple circumstances - those circumstances where even we human beings are usually able to identify causal explanations. And already for quite simple pseudorandom number generators you have de facto no chance.
AndreiB said:
Sure, but not always. You have independence in those situations when the microscopic state does not influence the macroscopic state.
Possible, but why should we care about the possibility to make bad design?
AndreiB said:
The statistics of a coin flip do not depend on the charge distribution of the electrons and nuclei inside the coin.
Not sure but plausible. But there are also all the gas particles in the air, and the atoms of the body of the guy who throws the coin.
AndreiB said:
The trajectory of a billiard ball depends on the initial position and momentum, not on the microscopic configuration of its internal charges.
But on those of the billiard player. And the air.
AndreiB said:
Fluid mechanics does not depend on the exact arrangement of the molecules.
Sure? The equations of continuum mechanics do not, by definition. But if there is turbulence, minor distortions will heavily increase in size. Not much turbulence is necessary to make the outcome depend on the microscopic arrangement.
AndreiB said:
The efficacy of a treatment does not depend on the exact arrangement of the drug molecules.
No, but it depends on the knowledge of who has got the real medicine and who has got the placebo, if the treatment acts partly as a placebo. Some aspects of an experiment - like those in your examples - can be more or less completely controlled, so that the remaining uncertainty does not matter; others cannot. And in real experiments you always have to assume that some of the aspects which you cannot control are independent of what really matters in your experiment.
AndreiB said:
Not all experiments have statistical outcomes.
One can, of course, reduce the possible outcomes to a discrete number of large subsets, and then it is possible that there will be only a single one, so no statistics involved. But these are exceptions, not the rule. Measurement errors are usually sufficient to force you to use statistics.
AndreiB said:
Indeed, in the microscopic world nothing is independent. It's a well known implication of QM. It's called contextuality.
Nothing special for QM. Contextuality is also common in human interactions.
AndreiB said:
Yes, an analysis is necessary, since not all experiments depend on the microscopic states. The billiard balls on a table do not have independent microscopic states. But this is irrelevant if I study their collisions.
If you reduce science to experiments with deterministic outcomes, so that no statistics is necessary, not much remains. If you have statistics, it is difficult to live without any independence assumptions. I would allow for some exceptions - but these would be exceptions, not the rule.
 
  • Like
Likes gentzen
  • #82
AndreiB said:
He seemed to be fine with a causal relationship between space-like events
I said no such thing. All I said was that QFT does not say anything about "causal relationships" at all. It just says that spacelike separated measurements commute. You are the one who keeps harping on "causal relationships" without being able to give any testable definition of the concept.
 
  • #83
AndreiB said:
1. Well, 't Hooft disagrees with you:

Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy

https://arxiv.org/abs/2103.04335

2. No, it's a very clearly defined idea. The hidden variable and the settings of the detectors are not statistically independent variables.

1. In this paper*, there is no model presented that explains Bell correlations.

As always, I challenge anyone (and especially 't Hooft) to take the DrChinese challenge for their local realistic (or superdeterministic, as the case may be) model. If there are values for various choices of measurement angles (which I choose, or think I choose), what are they for angle settings 0/120/240 degrees (entangled Type I PDC photon pairs)? The challenger provides the results, I pick the angle settings. According to the 't Hooft hypothesis, I will always pick pairs that yield the correct quantum expectation value.

How is it that I, sitting here at a remote keyboard, am forced to select angle pairs that come out to a 25% "apparent" match rate when the "true" match rate - according to local realism - is over 33%? But hey, if it works, I will gladly acknowledge a winner. Always looking for takers to the DrChinese challenge. :smile:

2. Clearly defined? Exactly how are Alice and Bob's apparently independent choices of settings tied to the measurement outcomes? This is the crucial detail always skipped over. What quantum effect causes human brains to make the choices that precisely yield the grossly misleading value predicted by QM? Because I missed that section of the "model"... There is no model of "Superdeterminism" that is any more specific than saying "God made me choose these measurement settings".

*All gigantic claims and hand-waving, no proofs or meaningful examples. I am well aware of 't Hooft's deserved reputation, but that does not give him a pass in this area. No serious researcher in the area of Bell entanglement would cite these papers as accepted or useful science. This paper has been cited by 1 paper, by D. Dolce. "Prediction of Unified New Physics beyond Quantum Mechanics..." has 34 references, 17 to his own papers - draw your own conclusion.
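For readers wondering where the 25% vs. 33% figures come from: at a relative angle of 120°, QM predicts a match rate of cos²(120°) = 0.25 for these photon pairs, while a brute-force check over all eight local-realistic assignments of predetermined ±1 outcomes at 0/120/240 degrees shows the average match rate over distinct setting pairs can never drop below 1/3. A quick sketch:

```python
import math
from itertools import product

pairs = [(i, j) for i in range(3) for j in range(3) if i != j]  # distinct settings

# minimum match rate over all 8 predetermined-outcome assignments at 0/120/240
worst = min(
    sum(a[i] == a[j] for i, j in pairs) / len(pairs)
    for a in product((+1, -1), repeat=3)
)
qm = math.cos(math.radians(120)) ** 2  # quantum prediction at 120 degrees apart

print(worst, qm)  # 1/3 vs. 1/4: no assignment gets down to the quantum value
```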
 
  • Like
  • Love
Likes mattt, Lord Jestocost, bhobba and 1 other person
  • #84
DrChinese said:
There is no model of "Superdeterminism" that is any more specific than saying "God made me choose these measurement settings".
That's your claim, but can you prove it? There are several models presented in https://arxiv.org/pdf/1511.00729.pdf and in the conclusion, the author says:
Given that standard quantum mechanics satisfies statistical locality and measurement independence, Occam’s razor suggests that it is the intuition behind determinism (and thus statistical completeness) that must be given up. On the other hand, it may be argued that relaxing measurement dependence is relatively far more efficient: only 1/15 of a bit of measurement dependence is required to model the singlet state, in comparison to 1 bit of communication in nonlocal models, and 1 bit of shared randomness in nondeterministic models [11].
I don't disagree that superdeterminism is a peculiar and unlikely solution to the problem, but one has to admit that non-locality is just as peculiar and thus, one shouldn't delegitimize research in this direction. I do believe that hidden variable theories are the wrong approach in general (independent of whether they are non-local or superdeterministic), but in science we shouldn't judge an approach based on our personal preferences. There's no reason to be harsh on people who study these approaches in a rigorous, scientific way.
 
  • #85
Nullstein said:
I don't disagree that superdeterminism is a peculiar and unlikely solution to the problem, but one has to admit that non-locality is just as peculiar
No. Non-locality is a minor problem; science lived and developed without problems during the time when gravity was described by the non-local Newtonian theory. It is not even very peculiar, because non-locality is the natural limit of a very large speed of causal influences. So, if the speed of causal influences is too fast to be measured, we have to expect non-locality.

Superdeterminism is, instead, the ultimate conspiracy theory and, if taken seriously, the end of any statistical experiment.
Nullstein said:
and thus, one shouldn't delegitimize research in this direction. There's no reason to be harsh on people who study these approaches in a rigorous, scientific way.
Criticism of wrong approaches is a necessary part of science.

Of course, it would be preferable if the independence of science would return, so that scientists would be free to do research in whatever direction, without the certainty that they will not get the next grant if the mainstream does not like their research direction. But there is nothing one can do against this; the independence of science is a nice memory of the past, such is life.

But it does not follow that, as some sort of compensation, one should not criticize those who support nonsensical research directions, or even that one should not use harsh words to name such research directions. I think such criticism, however harsh, is much better than ignorance.

Which is, unfortunately, the way outsiders are handled today by the mainstream. Criticizing outsiders is also not favored by the mainstream; those who start doing it also risk losing their job once the next grant is needed.
 
  • #86
AndreiB said:
just because you call the solution "nonsense" doesn't make it so.
Of course. One has to give counterarguments against the arguments provided by those who propose nonsense. As I have done, and will continue to do. But this will not prevent me from naming superdeterminism nonsense. Because it is - I continue to claim that superdeterminism, if taken seriously, would be the end of science. There would be no way to distinguish it from astrology.
 
  • #87
Sunil said:
No. Non-locality is a minor problem, science has lived and developed without problems during the time gravity has been described by non-local Newtonian theory. It is not even very peculiar, because non-locality is the natural limit of a very large speed of causal influences. So, if the speed of causal influences is too fast to be measured, we have to expect non-locality.
I explained in post #60 why it's not even the speed of the interaction that is most peculiar about it.

Moreover, non-locality and superdeterminism are quite similar, because in a completely deterministic theory, the non-local cause in the present can be evolved back into the past. If Alice turning the knob has caused Bob's particle's properties to change, then there is some event in the past that has caused Alice to turn the knob in the first place, which can be taken to be the ultimate cause. Similarly you can evolve back the event of Bob's particle's properties changing. Since everything had the chance to interact during the Big Bang, at some point in the far away past, there will be some overlap between the two evolved back events that could act as a superdeterministic common cause (i.e. conditioning on it will violate statistical independence).

Sunil said:
Superdeterminism is, instead, the ultimate conspiracy theory and, if taken seriously, the end of any statistical experiment.
Not a worse conspiracy than non-locality. And it's not true that it would be the end of any statistical experiment. One can still come up with theories, some of them being superdeterministic, and then compare them to the data. In the end, one picks the one which explains the data in the most elegant way and that may or may not be a superdeterministic theory.
Sunil said:
Criticism of wrong approaches is a necessary part of science.
Criticism should be based on arguments though and not on personal preferences or even polemics.
Sunil said:
But it does not follow that, as some sort of compensation, one should not criticize those who support nonsensical research directions for this, and not even that one should not use harsh words to name such research directions. I think such criticism, however harsh, is much better than ignorance.
Calling something "nonsensical" is not an acceptable form of criticism and being harsh isn't either. As I said, only arguments should matter.
 
  • #88
Nullstein said:
I explained in post #60 why it's not even the speed of the interaction that is most peculiar about it.
I find there
In fact, the knob on Alice's polarizer must, when turned, somehow magically have the capability to modify Bob's particle, despite the fact that it was never built specifically to have this capability. Why doesn't it manipulate any other particle in the universe as well?
The polarizer interacts completely non-magically with the particle which is in Alice's laboratory. That particle was prepared together with Bob's particle, in a very special state. If there is no such special history of relations, there is no entanglement and no non-local influence. (The history of these special relations may be quite complex, but without any such history there is nothing.)
Nullstein said:
Moreover, non-locality and superdeterminism are quite similar, because in a completely deterministic theory, the non-local cause in the present can be evolved back into the past. If Alice turning the knob has caused Bob's particle's properties to change, then there is some event in the past that has caused Alice to turn the knob in the first place, which can be taken to be the ultimate cause. Similarly you can evolve back the event of Bob's particle's properties changing. Since everything had the chance to interact during the Big Bang, at some point in the far away past, there will be some overlap between the two evolved back events that could act as a superdeterministic common cause.
You forget that not every common bit of information in the past gives a correlation. And that there is a well-defined math to control for some common cause which can tell you, for example, that only some part of the correlation is explained by this common cause but a remaining part not.

In dBB theory the non-local influences are precisely described mathematically. They are quite restricted, and the influence can be computed. Superdeterminism has no mathematics which allows one to compute anything. Don't forget that mathematicians have proven that some pseudorandom sequences survive many tests for randomness. And randomness is what has to be expected if the dependencies become too complex - as they necessarily become in superdeterminism.

That there exist some possible common causes ##C\to A, C\to B## in the past is usually nothing that matters - it matters only if their influence on ##A## resp. ##B## is strong enough to explain the correlation. You need ##P(A)\neq P(A|C)## or ##P(B)\neq P(B|C)##, and the difference should not be some negligible epsilon, but enough to explain the observable correlation ##P(AB)\neq P(A)P(B)##.
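This quantitative point can be illustrated with a toy calculation (the numbers are made up for illustration): if a binary common cause ##C## shifts the chances of ##A## and ##B## only by a small ##\epsilon##, the correlation it can generate between them is of order ##\epsilon^2## - far too small to explain a sizable observed correlation.

```python
# Toy check: a common cause C that barely shifts A and B cannot
# produce a large correlation between them.
eps = 0.01
# C in {0, 1} is uniform; given C, A and B are independent coin flips
# whose bias is shifted by +/- eps depending on C.
pA = {0: 0.5 - eps, 1: 0.5 + eps}
pB = {0: 0.5 - eps, 1: 0.5 + eps}

p_ab = sum(0.5 * pA[c] * pB[c] for c in (0, 1))  # P(A=1, B=1)
p_a  = sum(0.5 * pA[c] for c in (0, 1))          # P(A=1) = 0.5
p_b  = sum(0.5 * pB[c] for c in (0, 1))          # P(B=1) = 0.5
cov = p_ab - p_a * p_b
print(cov)  # ~ eps**2: the correlation scales with the *square* of the influence
```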
Nullstein said:
Not a worse conspiracy than non-locality. And it's not true that it would be the end of any statistical experiment. One can still come up with theories, some of them being superdeterministic, and then compare them to the data.
No. No statistical experiment can give you anything if you are not allowed to assume independence as the default assumption when no correlation is known (the null hypothesis). You have looked in a nasty way at the device - is that independent, and therefore changes nothing? Sorry, no, this nasty look has superdeterministically caused whatever.

Nullstein said:
In the end, one picks the one which explains the data in the most elegant way and that may or may not be a superdeterministic theory.
It cannot be a superdeterministic theory, because such a theory cannot predict anything. To predict something you always have to make a lot of simplifying assumptions, and you always have to assume that most things in the universe are independent of what is studied.

Or you go back to astrology. But given that the positions of the planets are, of course, visible to you and can therefore influence you, as well as your date of birth, astrology is much more compatible with science than superdeterminism.
Nullstein said:
Calling something "nonsensical" is not an acceptable form of criticism and being harsh isn't either. As I said, only arguments should matter.
Ok, feel free to ignore that I name superdeterminism nonsense, and care about the arguments which I propose too. It remains nonsense, and I will continue to name it nonsense.
 
  • #89
Sunil said:
The polarizer interacts completely non-magical with the particle which is in Alice' laboratory. That particle was prepared together with Bob's particle, in a very special state. If there is no such special history of relations, there is no entanglement and no non-local influence. (The history of these special relations may be quite complex, but without any such history there is nothing.)
The interaction between Alice's polarizer and Alice's particle is plausible and non-magical, because they are in direct contact with each other. However, Alice's polarizer is not in direct contact with Bob's particle, nor in direct contact with the TV in the living room. So why would it be plausible that her polarizer can modify Bob's particle but not turn on the TV in the living room? Entanglement is just a statistical feature of the whole population of identically prepared systems and not a property of the individual particles. For example, if the measurement axes are not aligned, the correlation may be only 10%, so it can't be an individual property of the particles and just shows up in the statistics of the whole ensemble.
Sunil said:
You forget that not every common bit of information in the past gives a correlation. And that there is a well-defined math to control for some common cause which can tell you, for example, that only some part of the correlation is explained by this common cause but a remaining part not.
No I didn't forget that. I've shown you a recipe to construct a common cause event in the past given two events in the present under the assumption that they are linked by a non-local cause-and-effect relationship and the assumption that they had the chance to interact at some time in the past (as would be the case in a Big Bang scenario). This common cause satisfies the required conditional probability relations. Under the given assumptions, the existence of a non-local explanation implies the existence of a superdeterministic explanation.
Sunil said:
No. No statistical experiment can give you anything if you are not allowed to assume independence as the default assumption if no correlation is known (zero hypothesis). You have looked in a nasty way at the device. This is independent, and therefore does not change anything? Sorry, no, this nasty look has superdeterministically caused whatever.
A superdeterministic theory can make statistical predictions which can be falsified, just like any other theory. The situation is not worse than in any other hidden variable theory. We cannot falsify the claim that there are hidden variables, but we can draw conclusions from the theory and falsify it, if the predictions don't match the experiment.
Sunil said:
It cannot be a superdeterministic theory because such a theory cannot predict anything.
A superdeterministic theory can of course predict something. For example, the paper I cited earlier describes superdeterministic theories that predict the Bell correlations and the theories would have to be rejected if the predictions didn't match the experiment.
 
  • #90
Sunil said:
No. Non-locality is a minor problem,

As Peter keeps correctly pointing out, statements like the above depend on what you mean by locality. Also, it expresses a personal reaction, which is different from scientific fact. People do it all the time, but it is wise to realize what is going on. As shown by the many discussions here, Bell can be 'confounding' to many. Yes, I have changed my views on it several times since posting here for over ten years now. Bell is airtight, and its experimental confirmation is one of the outstanding achievements of 20th century science, but how you view it does lead to subtleties. Superdeterminism is IMHO the last gasp of those that, for some reason, want an out - but again, just my opinion. I am a bit surprised by 't Hooft, whose How To Be A Good Theoretical Physicist IMHO is a gem - up there with the Feynman Lectures, the Landau Theoretical Minimum and Susskind's Theoretical Minimum
https://www.goodtheorist.science/qft.html

Thanks
Bill
 
  • Like
Likes gentzen
  • #91
Nullstein said:
The interaction between Alice's polarizer and Alice's particle is plausible and non-magical, because they are in direct contact with each other. However, Alice's polarizer is not in direct contact with Bob's particle and neither in direct contact with the TV in the livingroom. So why would it be plausible that her polarizer can modify Bob's particle but not turn on the TV in the livingroom?
Alice's polarizer influences the configuration of Alice's particle via direct contact. Alice's particle interacts with Bob's particle through the entanglement created by the preparation. For the details, look at the Bohmian velocity.

Nullstein said:
Entanglement is just a statistical feature of the whole population of identically prepared systems and not a property of the individual particles.
The "just" in "just a statistical feature" is your interpretation. The preparation procedure was applied to all individual particles. So, it can lead (and does lead) to shared behavior.

Nullstein said:
For example, if the measurement axes are not aligned, the correlation may be only 10%, so it can't be an individual property of the particles and just shows up in the statistics of the whole ensemble.
This argument makes no sense to me.
Nullstein said:
No I didn't forget that. I've shown you a recipe to construct a common cause event in the past given two events in the present under the assumption that they are linked by a non-local cause-and-effect relationship and the assumption that they had the chance to interact at some time in the past (as would be the case in a Big Bang scenario). This common cause satisfies the required conditional probability relations.
Because you say so?

Nullstein said:
Under the given assumptions, the existence of a non-local explanation implies the existence of a superdeterministic explanation.
Given that a "superdeterministic explanation" can be given for everything (which shows that it has nothing in common with an explanation), this is a triviality not worth mentioning.
Nullstein said:
A superdeterministic theory can make statistical predictions which can be falsified, just like any other theory. The situation is not worse than in any other hidden variable theory. We cannot falsify the claim that there are hidden variables, but we can draw conclusions from the theory and falsify it, if the predictions don't match the experiment.
No, no superdeterministic theory can make falsifiable predictions.

Of course, this depends on what one names a superdeterministic theory. A theory which claims that some experimenters have cheated and, instead of using really random preparation, used knowledge about some initial data, also has a correlation between the experimenters' decisions and the initial data. If you name this a "superdeterministic theory", then, indeed, a "superdeterministic theory" can be a normal falsifiable theory. But in this case, there is a simple, straightforward prescription for how to falsify it: use a different method of making the experimenters' decisions. Say, add a pseudorandom number generator and use the number to modify the decision. If the effect remains unchanged, then this particular conspiracy theory is falsified. And given this possibility, theories of this type are obviously worthless for the discussion of Bell's inequality, because the effect is already known to appear with very many different methods of making the experimenters' choices.

For me, a superdeterministic theory requires more, namely that it goes beyond the simple "cheating experimenters" theory. That means it has to explain a correlation between initial data and experimenters' decisions for every prescription of how the experimenters' decision has been made. Which, for a start, includes a combination of the output of pseudorandom number generators, light sources from the opposite side of the universe, and some simple Geiger counter. Note also that the correlation should always be present, and have a large enough size - enough to allow the observed quantum violations of the BI.
 
  • Like
Likes mattt
  • #92
Non-locality is a minor problem
bhobba said:
As Peter keeps correctly pointing out, statements like the above depend on what you mean by locality.
No problem, in this case I talk about Bell locality.
bhobba said:
Also, it expresses a personal reaction that is different to scientific fact.
Of course, if I look at the facts I have presented and make a conclusion, this conclusion is a personal reaction. This does not make it non-objective or so. If I compute 2+2=4, this is also only my personal computation, I can make errors in my computations. So what?

That science, and in particular the theory of gravity together with its applications in astronomy, developed successfully in the time of non-local Newtonian gravity is a fact. So we have empirical evidence from history of science that science works nicely if based on non-local theories.

The point that non-local theories appear in a natural way as limits of local theories if the limiting velocity of causal influences goes to infinity is also a quite simple fact. That means, all one needs to have a non-local theory being viable is that that limiting velocity is too large to be measured.

Both points cannot be made for superdeterminism. The fact that it claims that something assumed to be zero by normal causal theory can be nonzero prevents superdeterminism from being a limit of normal causal theories. And my claim remains that superdeterminism, if taken seriously (instead of being ignored everywhere except in the particular case of the BI violations, to get rid of the non-locality which would destroy relativistic metaphysics), would be the end of science.
bhobba said:
Bell is airtight, and its experimental confirmation is one of the outstanding achievements of 20th Century science, but how you view it does lead to subtleties.
I try to view it without following relativistic prejudices, that's all. But relativistic prejudices are quite strong feelings. Which is what has to be expected, given that those who study physics plausibly have been impressed by Einstein's argumentation for relativity already during their childhood. If they have not been impressed by this argumentation and liked it, they would hardly have decided to study physics. Such childhood impressions, if not questioned and rejected during the youth, tend to become dogma and create strong emotional reactions against the persons who question them. The readiness to question everything does not extend to relativistic metaphysics. Not strange if one takes into account that the readiness to question everything of common sense was learned together with relativistic metaphysics.

The relativistic argument itself is a valid one: There is a symmetry in the experiences, and the simplest way to explain it is to assume that it is a fundamental symmetry, a symmetry of reality. But this is not that strong. It is sufficient to present a derivation of relativistic symmetry for observables from a non-symmetric reality to meet it.
 
  • #93
Sunil said:
Those who are so simple and regular that a causal explanation can be found.
I think we have a different understanding about what that function is, so I'll give a more detailed account of its calculation.

We start with a system of N charges corresponding to a physically possible microscopic state of the whole experiment. We input this state in a simulation on some supercomputer. The simulation will output 8 symbols:

Ax, Ay, Az - the spin components (hidden variables) of particle A
Bx, By, Bz - the spin components (hidden variables) of particle B
D1 (orientation of detector 1 at the time of detection)
D2 (orientation of detector 2 at the time of detection)

So, an output like +++, --+, X, Y would correspond to an experimental run where the spins of particle A are UP, UP, UP in the X, Y and Z directions respectively, the spins of particle B are DOWN, DOWN, UP in the X, Y and Z directions respectively, and the particles are measured on X (at detector D1) and on Y (at detector D2).

We repeat the simulation for a large number of initial states so that we get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved. In the end we just count how many times a certain value of the hidden variables corresponds to each detector setting. Our function will simply give you the probability of getting a certain hidden variable for a certain type of measurement.

As you can see, this function is fixed. It's just the statistical prediction of the theory (say classical EM) for this experiment. The function itself is trivially simple, but it is based on a huge amount of calculations used to solve those N-body EM problems. You CAN choose whatever initial state you want (random number generators and such) but you cannot mess with the function.
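The procedure described above could be sketched as follows. This is only an illustrative toy: the `simulate` function is a hypothetical stand-in for the actual N-body EM integration (a supercomputer job in the post's setup), with made-up deterministic dynamics; only the overall shape (deterministic evolution, random initial states, tallying, then a fixed probability function) mirrors the description.

```python
import random
from collections import Counter

def simulate(initial_state):
    # Stand-in for the full deterministic N-body EM evolution: same initial
    # state always yields the same 8 output symbols. The rule below is a
    # toy assumption, not real physics.
    rng = random.Random(initial_state)
    spins_a = "".join(rng.choice("+-") for _ in range(3))          # Ax, Ay, Az
    spins_b = "".join("-" if s == "+" else "+" for s in spins_a)   # Bx, By, Bz
    d1 = rng.choice("XYZ")  # orientation of detector 1 at detection time
    d2 = rng.choice("XYZ")  # orientation of detector 2 at detection time
    return spins_a, spins_b, d1, d2

# Repeat for many randomly chosen initial states and tally the outcomes.
counts = Counter()
for initial_state in range(10_000):
    spins_a, spins_b, d1, d2 = simulate(initial_state)
    counts[(spins_a, spins_b, (d1, d2))] += 1

def prob(hv, settings):
    # The fixed "function" of the post: P(hidden variables | detector settings),
    # estimated from the tallies.
    total = sum(c for (a, b, s), c in counts.items() if s == settings)
    matches = sum(c for (a, b, s), c in counts.items()
                  if (a, b) == hv and s == settings)
    return matches / total if total else 0.0
```

With the tallies in hand, one can then check directly whether `prob` depends on `settings`, which is exactly the independence question discussed next.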

OK, so now we can understand what Bell's independence assumption amounts to. It posits that the probability distribution of the hidden variables must be the same for every choice of detector settings. My question to you is simple. Why? I think it is entirely possible that those probabilities would not be the same. Some combinations could have 0 probability because, say, no initial state can evolve into them.

As far as I can tell, assuming the probabilities must be the same is completely unjustified, or, using your words, is pure nonsense.

It's also nonsense to claim that if the probabilities come out differently we should abandon science. As you can see, superdeterminism is perfectly testable, you can get predictions from it and compare to experiment.
Sunil said:
Science depends on the possibility to make independent choices for experiments.
Explain to me why the above test, based on computer simulations where choices are not independent, is not science.

Sunil said:
The classical Maxwell equations have a limiting speed, namely c. This is sufficient to prove the Bell inequalities for space-like separated measurements.
Again, classical EM involves long-range interactions. Only after the function is calculated can you see whether the independence assumption is satisfied or not.

Sunil said:
You can use some value to define the actual orientation of the device so that this turning cannot causally influence the initial state.
You cannot. Determinism implies that for a given initial state the evolution is unique.

Sunil said:
The conspiracy is that this "related" translates into a correlation. This happens to be only in exceptionally simple circumstances - those circumstances where even we human beings are usually able to identify causal explanations.
An N-body system where N is 10^26 or so is not simple. You need to solve the equations to get the predictions. Since the initial states are picked at random, there is no conspiracy involved.

Sunil said:
Not sure but plausible. But there are also all the gas particles in the air, and the atoms of the body of the guy who throws the coin.
We can rigorously prove that macroscopic neutral objects do not interact at a distance by calculating the Van der Waals forces between them. These forces are almost zero when collisions are not involved. So, we can justify the independence assumption theoretically.

In the case of direct collisions (contact between the body and the coin) there is no independence, sure, and nobody would claim there is.

In the case of fluid mechanics we can simply rely on experiment. If the Navier–Stokes equations give good predictions, we can deduce that the microscopic state of the fluid can be disregarded for that specific experiment. No need to make unjustified assumptions.
Sunil said:
One can, of course, reduce the possible outcomes to a discrete number of large subsets, and then it is possible that there will be only a single one, so no statistics involved. But these are exceptions, not the rule. Measurement errors are usually sufficient to force you to use statistics.
This is different. Sure, there are measurement errors, and you can try to reduce them by repeating the experiment and using the average value. But this has nothing to do with the measurement settings being free parameters.

I want to measure the temperature of something. I put a thermometer there. Nobody cares if my decision to make that measurement at that time and in that specific way was free or not. The only important thing is that the report contains all those decisions so that others can reproduce the experiment and rely on its result.
 
Last edited:
  • #94
PeterDonis said:
I said no such thing. All I said was that QFT does not say anything about "causal relationships" at all.
Yes, but there is a principle of logic (the law of excluded middle) that says that the following statements:

P1. A caused B
P2. A did not cause B

cannot both be true.

What you are doing is oscillating between them, which is logically fallacious. If I say that EPR proves that you either need A to cause B or hidden variables, you say that there is no experimental distinction between A causing B and A not causing B. But this is irrelevant. One of them has to be false. Which one?
 
  • #95
AndreiB said:
there is a principle of logic (the law of excluded middle)
Which only applies if the binary concept being used is in fact well-defined for the domain being discussed. If "cause" is not well-defined (and so far, despite my repeated requests, you have not given a testable definition of "cause"), then sentences involving that concept are meaningless, so you can't apply logic using them.
 
  • Like
Likes bhobba
  • #96
PeterDonis said:
Which only applies if the binary concept being used is in fact well-defined for the domain being discussed. If "cause" is not well-defined (and so far, despite my repeated requests, you have not given a testable definition of "cause"), then sentences involving that concept are meaningless, so you can't apply logic using them.
Logic does not require all concepts to be experimentally testable. And there is nothing meaningless about the concept of cause. In the case of EPR, "A caused B" means that the spin measured at A (say UP) changed B from whatever state it was before (including no state at all) to a spin state of DOWN. So, you can replace the word "caused" with the word "changed". So, you need to choose between these options:

P1. The measurement result changed B from whatever it was before to a DOWN state.
P2. The measurement result did not change B from whatever it was before to a DOWN state.
 
  • #97
AndreiB said:
Logic does not require all concepts to be experimentally testable.
If you can't give a testable definition of "cause", then it's not well defined if you're trying to make general claims based on "cause".

You can, of course, define what "cause" means in a particular model you happen to prefer even if that definition doesn't make "cause" testable in your model. But that doesn't require me to accept your definition or your model. And you certainly can't use such a model to make general assertions.

AndreiB said:
you can replace the word "caused" to the word "changed".
Doesn't change anything about what I've said.
 
  • #98
DrChinese said:
1. In this paper*, there is no model presented that explains Bell correlations.
No, but he derives QM, so, implicitly, his model makes the same predictions as QM.

DrChinese said:
As always, I challenge anyone (and especially 't Hooft) to take the DrChinese challenge for their local realistic (or superdeterministic as the case may be) model. If there are values for various choices of measurement angles (which I choose, or think I choose), what are they for angle settings 0/120/240 degrees (entangled Type I PDC photon pairs)? The challenger provides the results, I pick the angle settings.
This is the point of superdeterminism. You CANNOT pick the settings. They are determined by the initial state of the system. What you can pick is the initial state of the whole experiment; after that you cannot touch the experiment.
DrChinese said:
According to the 't Hooft hypothesis, I will always pick pairs that yield the correct quantum expectation value.
Yes, because you are part of the universe (the CA) and you have to obey the rules of the CA. The decisions you make are already "there" in the past state of the CA.

DrChinese said:
How is it that, I sitting here at a remote keyboard, am forced to select angle pairs that come out to a 25% "apparent" match rate when the "true" match rate - according to local realism - is over 33%?
The "true match rate" you are speaking about is only expected when no interactions exist (as in the case of distant Newtonian billiard balls). If interactions exist, the "true match rate" has to be determined from the initial state of the whole experiment, as explained in my post #93. So, you need to calculate the rate based on the CA rules, not based on Newtonian rigid-body mechanics. If 't Hooft's math is correct, the "true match rate" of his CA is the same as QM's.

DrChinese said:
2. Clearly defined? Exactly how are Alice and Bob's apparently independent choice of settings tied to the measurement outcomes?
These are Alice's possible states: +-+-+-+-+- and -+-+-+-+-+ (you can think of + and - as representing the charge distribution of Alice, or CA states). Same charges do not like being close to each other, this is why we only have those two states.

These are the possible states of the hidden variable: +-, -+

In the no interaction case these experimental states are possible:

1. Alice: +-+-+-+-+- HV: +-
2. Alice: +-+-+-+-+- HV: -+
3. Alice: -+-+-+-+-+ HV: +-
4. Alice: -+-+-+-+-+ HV: -+

If there is interaction, states 2 and 3 are impossible (same charges are facing each other), so the statistics are different.
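The counting above can be reproduced with a short enumeration. One assumption is made explicit here: "facing each other" is read as Alice's last charge sitting next to the hidden variable's first charge, which is a hypothetical reading chosen because it reproduces exactly the post's claim (states 2 and 3 forbidden, states 1 and 4 allowed).

```python
# The candidate states from the post.
alice_states = ["+-+-+-+-+-", "-+-+-+-+-+"]
hv_states = ["+-", "-+"]

def allowed(alice, hv):
    # Like charges repel: forbid configurations where the adjacent charges
    # (Alice's last charge, the HV's first charge) are the same.
    # This reading of "facing" is an assumption for illustration.
    return alice[-1] != hv[0]

# Without interaction, all four joint states occur.
no_interaction = [(a, h) for a in alice_states for h in hv_states]

# With interaction, only the compatible joint states survive.
with_interaction = [(a, h) for a in alice_states for h in hv_states
                    if allowed(a, h)]
```

Under this toy rule, `no_interaction` contains all four joint states, while `with_interaction` keeps only states 1 and 4, so the hidden-variable statistics conditioned on Alice's state come out different in the two cases.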
 
  • #99
PeterDonis said:
If you can't give a testable definition of "cause", then it's not well defined if you're trying to make general claims based on "cause".
I defined "cause". It means "change" in this context.
PeterDonis said:
You can, of course, define what "cause" means in a particular model you happen to prefer even if that definition doesn't make "cause" testable in your model. But that doesn't require me to accept your definition or your model. And you certainly can't use such a model to make general assertions.
I can certainly define what a state change means in QM. A Z-spin DOWN state will give a DOWN result on Z with 100% certainty. Any other state will not give a DOWN result on Z with 100% certainty. If at T1 you have a DOWN state and at T2 you have a different state, that qualifies as a change.
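The certainty criterion above can be written out as a small sketch. The states and the change test below are illustrative choices of mine (a qubit in the Z basis, with "change from DOWN" detected as loss of certainty of the DOWN outcome); they are not anyone's proposed model, just the Born rule applied to the definition in this post.

```python
import math

# Qubit states as (amp_up, amp_down) amplitude pairs in the Z basis.
DOWN = (0.0, 1.0)                              # Z-spin DOWN eigenstate
X_PLUS = (1 / math.sqrt(2), 1 / math.sqrt(2))  # an X eigenstate, not a Z eigenstate

def p_down_on_z(state):
    # Born rule: probability of the DOWN outcome in a Z measurement.
    return abs(state[1]) ** 2

def changed_from_down(state_t2, tol=1e-12):
    # The criterion in the post: if the state was DOWN at T1 and the state
    # at T2 no longer gives DOWN on Z with certainty, the state has changed.
    return p_down_on_z(state_t2) < 1.0 - tol
```

For example, `p_down_on_z(DOWN)` is 1, so `changed_from_down(DOWN)` is false, while an X eigenstate gives DOWN on Z only half the time and therefore counts as a changed state under this criterion.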

We know that, after the A measurement (UP), B is in a DOWN state. Was B in the same DOWN state before the measurement of A?
 
  • #100
AndreiB said:
This is the point of superdeterminism.
The point of "Superdeterminism" is simple: the initial conditions of the Universe were arranged in such a way that all measurements performed were and are consistent with the predictions of quantum mechanics.
 
  • Like
Likes EPR