I hope that this proposal, coming from a Nobel laureate in particle physics, will open the road to a more serious study of this class of theories, wrongly assumed to be ruled out.
Why do you say that? Bell's theorem is a theorem: either it holds or it doesn't, and once you prove it, it holds. It then follows that if you satisfy the assumptions that went into the proof, you cannot avoid its consequences. The only way to sidestep a theorem is to sidestep its assumptions, e.g. by showing that some of them need not always be satisfied for, say, interesting physical models. So what exactly was "wrong" (up to now)?
Absurd; all you have to do is read the paper. This is a shell game and does nothing to change Bell in any way.
From the paper:
1. "if Bell inequalities still apply to our system, are they indeed the ones that are routinely being tested in many of today’s carefully designed experiments, or are they formal features that have nothing to do with presently observable quantities? This is the important question that we wish to address."
The Bell question is whether a local realistic theory can reproduce the predictions of QM. His paper tries to show that a computer program (a cellular automaton) can have local realistic features. OK, who cares? It does not reproduce the QM statistics when Alice and Bob do their things. I.e., fail. (The entire point of Bell was that a local realistic theory cannot reproduce ALL of the features of QM.)
2. "Thus, the entangled states are needed to describe our universe from day one. In any meaningful description of the statistical nature of our universe, one must assume it to have been in quantum entangled states from the very beginning. In contrast, the evolution law may always have been a deterministic one.... Clearly, the only way to handle quantum entangled states, is by assuming that these states were quantum-entangled right from day one in the universe. This, we think, should not be a problem at all..."
His assumption was that there is a set of initial conditions which survives to today. This is essentially a rehash of his earlier paper on superdeterminism. I do not consider superdeterminism to be science. It is strictly ad hoc and borders on the religious.
The upshot: yes, 't Hooft is a great physicist. But this paper changes nothing about Bell's Theorem's conclusions/import, and will only be given attention because of his name. I have no idea why he has taken on this strange crusade when there are plenty of other theoretical issues out there that could use his attention.
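The "does not reproduce the QM statistics" point can be made concrete. Below is a toy local hidden-variable model (my own illustration for this thread, not anything from 't Hooft's paper): each pair carries a shared hidden angle, and each wing computes its outcome locally. It reproduces perfect anti-correlation at theta = 0, but yields a linear correlation curve rather than QM's -cos(theta) for the singlet, which is exactly the kind of gap Bell exploits:

```python
import math
import random

# Toy local hidden-variable model (my illustration, NOT 't Hooft's model):
# each pair carries a shared hidden angle lam; each wing locally outputs
# the sign of cos(setting - lam).
def lhv_E(a, b, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1  # anti-correlated source
        total += A * B
    return total / n

theta = math.pi / 3
print(lhv_E(0.0, 0.0))      # exactly -1: perfect anti-correlation at theta=0
print(lhv_E(0.0, theta))    # ~ -(1 - 2*theta/pi) = -1/3, linear in theta
print(-math.cos(theta))     # QM singlet prediction: -0.5
```

So the toy model matches QM at theta = 0 and theta = pi/2 but deviates everywhere in between; Bell's argument shows no local tweak of the hidden-variable rule can close that gap at all angles.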
Another way to see the problem with this paper:
"Clearly, the only way to handle quantum entangled states, is by assuming that these states were quantum-entangled right from day one in the universe. This, we think, should not be a problem at all."
If there is information encoded in the model from the beginning of the universe, as he states, then why doesn't EVERY pair of photons exhibit Bell correlations? In fact, the only pairs that do are ones created from entangled-photon sources such as PDC crystals, i.e. ones that were born just in the last few seconds. That stands in direct contradiction to his hypothesis.
So now he is trying to say (my paraphrase): "The observers were once in causal contact (and their choices were somehow constrained), so that is why Bell inequalities are violated with these photons. Oh, but I propose no explanation as to how this occurs (or even relates to the cos^2 function), I just wave my hand and say it is possible. And I cannot explain why the entangled nature of the entire universe doesn't manifest itself otherwise."
As I have said many times previously: show me how space-like separated observers - using radioactive decay to select polarizer settings - can have initial conditions which cause those choices to themselves be pre-determined. That is an absolute requirement of any superdeterministic theory, and yet would absolutely violate the Standard Model and experiment regarding the random nature of those samples. As well as violating any concept of what science is.
(You know, maybe Santa Claus exists too. It is still NOT science, but 't Hooft can't prove me wrong on that!)
't Hooft is simply arguing that working on deterministic models is not a priori a fruitless effort. He does claim that it is possible to derive a bona fide quantum theory from a deterministic theory without any handwaving.
His arguments about superdeterminism are simply meant to explain the apparent contradiction with the fact that Bell's inequalities are violated.
A particular assumption involved in Bell's theorem, namely the statistical independence assumption, is almost never questioned, although there are good reasons to do so. On the other hand, the so-called "realism assumption" and locality are often questioned. This unequal treatment is wrong IMO.
No, as DrFaustus correctly pointed out, Bell's theorem is only relevant for those models that satisfy its assumptions. 't Hooft's model does not satisfy the statistical independence assumption, therefore Bell's theorem is irrelevant. His model (the universe as a cellular automaton) has a lot of problems, like achieving Lorentz invariance, but Bell's theorem is not one of them.
Read Bell again. He speaks about the option 't Hooft chooses to research and does acknowledge that it is possible. I have repeatedly given you the exact quote but you have chosen to ignore it, continuing to repeat your mantra that "no local-realistic... blah blah".
Your personal opinion about superdeterminism is simply irrelevant in the absence of some justification. You have presented no justification for the claim that 't Hooft's model is not science but religion.
Of course Bell's Theorem's conclusions are not changed in any way and nobody tries to do that. It is your misunderstandings about what those conclusions are that need to change.
I'm waiting for 't Hooft to come up with a paper about the Leggett experiments conducted by the Gisin and Zeilinger groups with the full involvement of Leggett (Dr. Macrorealism) himself. Maybe this was just a warm up for something along those lines. If not, why not?
A couple of interesting papers by Charles Tessler suggest that you don't even need the locality assumption for Bell. You can Occamize it out. It's all about realism.
I tried to find some of those Tessler papers, but had no luck. Can you point me in a direction?
1. Of course Bell is an obstacle for ANY physical theory that covers the same ground as QM. Any approach that hopes to eventually cover that ground is also affected. Now 't Hooft can choose to delay the day of reckoning, or not. But it will come and Bell will come into play. No local realistic physical theory can provide the same predictions as QM. That includes theories of cellular automata. Really, how is that hard to understand?
2. Bell does not mention superdeterminism in his original paper, and was not a believer. As to his mentioning it in subsequent discussions, again, so what?
3. My opinion - ha - is simply that superdeterminism = Santa Claus. I am sure there are plenty of readers here who agree that there is NOT ONE iota of evidence for it, NOT ONE hint of it, and it smacks of grasping at straws.
4. My understanding is the same as the mainstream scientific community: local realism is excluded as inconsistent with QM, which is experimentally verified. You are the one on the outside looking in.
But it really isn't possible unless you can overcome some obstacles. Because of Bell's Theorem, we now know that the results of Bell tests must be explained for the idea to have merit. So we must add a new assumption to those of locality (L) and realism (R): the idea that the initial conditions of the universe (ICU) are accessible.
So the issue is about ICU: is this a provable or disprovable hypothesis? And the answer is YES, such an assumption has definite baggage associated with it that 't Hooft must be aware of but chooses not to address. So I will say it:
1. Why do ONLY entangled particle pairs display this behavior? I would expect it to appear everywhere! After all, the hypothesis is that the choice of Alice and Bob's measurement setting was superdetermined by the ICU.
2. Suppose I have Alice and Bob's measurement settings chosen based on an algorithm tied to radioactive decay of a sample of uranium. That involves the weak force. The settings for Alice and Bob should appear random. We expect that the Bell Inequalities will be violated still, correct?
Well if they are, we now have to postulate as to the mechanism by which the ICU are able to communicate and express themselves via the radioactive sample, then run through the algorithm to ultimately select the correct pair of settings for Alice and Bob so that the local realistic observations at Alice and Bob can correlate. A tall order indeed, considering none of this is part of the Standard Model!
But the funny thing is, I can change the algorithms any way I want - I can reverse the results at Alice for example - or otherwise bias the results some way. And yet regardless of the algorithm I choose, I expect the results to violate a Bell Inequality! And that remains true even if the settings at Alice and Bob are left static.
So without some explanation of how I can play these games with the choice of Alice and Bob's settings, I don't see how ICU can be taken seriously as an assumption. And please, don't tell me that it cannot be ruled out as my point is that it can - if an ACTUAL mechanism were given to us to consider. (But 't Hooft did not do that - he showed us something else entirely.)
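For what it's worth, the insensitivity of the predicted violation to the setting-choice algorithm is easy to demonstrate numerically. The sketch below is my own and assumes only the standard QM singlet statistics (it is not a local realistic model): outcomes are sampled from the quantum joint distribution, and the CHSH quantity is computed. Swapping a random setting rule for a fixed, biased deterministic pattern makes no difference to the violation:

```python
import math
import random

def singlet_outcomes(a, b, rng):
    """Sample outcomes (A, B) in {+1, -1} for a spin singlet measured at
    angles a and b, using the standard QM probabilities:
    P(A = B) = sin^2((a-b)/2), so E[A*B] = -cos(a-b)."""
    same = rng.random() < math.sin((a - b) / 2.0) ** 2
    A = 1 if rng.random() < 0.5 else -1
    return A, (A if same else -A)

def chsh(setting_rule, n=200_000, seed=1):
    """Estimate the CHSH quantity; settings are picked each trial by setting_rule."""
    rng = random.Random(seed)
    angles_a = [0.0, math.pi / 2]          # Alice's two settings
    angles_b = [math.pi / 4, 3 * math.pi / 4]  # Bob's two settings
    sums = {(i, j): [0, 0] for i in range(2) for j in range(2)}
    for t in range(n):
        i, j = setting_rule(t, rng)
        A, B = singlet_outcomes(angles_a[i], angles_b[j], rng)
        sums[(i, j)][0] += A * B
        sums[(i, j)][1] += 1
    E = {k: s / c for k, (s, c) in sums.items()}
    return abs(E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1])

random_rule = lambda t, rng: (rng.randrange(2), rng.randrange(2))
cycle_rule = lambda t, rng: (t % 2, (t // 2) % 2)  # fixed, biased pattern
print(chsh(random_rule))  # ~2.83, above the local bound of 2
print(chsh(cycle_rule))   # ~2.83 as well
```

Any setting algorithm whatsoever gives ~2sqrt(2) here, which is exactly the burden on a superdeterministic account: the ICU would have to anticipate every such algorithm.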
That could be because his name isn't Charles Tessler. It's Charles Tresser.
I'm the Concept guy, see. Generally I leave the details to my serfs.
Great, I like the direction of these papers. Please give your serfs my thanks.
The correct formulation is: no local realistic and "free" physical theory can provide the same predictions as QM. By "free" I mean a theory that admits that the measurement directions can be freely chosen. The cellular automata model does not admit this. Why do you find this so hard to understand?
- Gravity works as described by GR AND the measurements can be freely chosen...
- The periodic table drives chemical reactions AND the measurements can be freely chosen...
- Particle experiments work per the Standard Model AND the measurements can be freely chosen...
Do you notice that "freely chosen" can be attached to ANY scientific theory? Somehow makes that phrase completely meaningless. Which is why it is NOT science. Why do you find this hard to understand?
There is nothing about the free choice of the experimenter that is ANY different for Bell Tests than any OTHER scientific experiment. Only desperation would lead someone to single this area out. (In my opinion, of course.) Every experiment about local realism shows the same results, without exception: QM cannot be local realistic. That is not just my opinion, that is the result of thousands of experiments combined with an important and well proven theorem.
I'd guess that would be because of a chain of events that he has no control over, if his theory is right. But if his theory is right, it would also make no sense to ask why you don't accept his ideas.
All 't Hooft has to do is derive quantum mechanics; he doesn't really need to explain how it is possible that Bell's inequalities are violated. What 't Hooft is doing is simply to defend pursuing work on these theories despite an apparent no-go theorem. What he is writing about this is not "his theory".
To see that what 't Hooft is doing is possible in principle, consider a classical computer that simulates the ordinary quantum evolution laws. The degrees of freedom that describe the computational states of the computer are being manipulated in a local deterministic way. Yet, the computer is able to give rise to a virtual world that is described by the usual quantum mechanical laws.
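As a trivial illustration of that point, here is a sketch of my own (not from the paper): a perfectly deterministic classical computation that reproduces the singlet correlation E(a, b) = -cos(a - b) purely by doing ordinary arithmetic on amplitudes; nothing "quantum" runs on the hardware itself:

```python
import math

def eigvec(theta, outcome):
    """Real eigenvector of spin measured along angle theta (in the x-z plane)."""
    if outcome == +1:
        return (math.cos(theta / 2), math.sin(theta / 2))
    return (-math.sin(theta / 2), math.cos(theta / 2))

def singlet_E(a, b):
    """Exact quantum correlation E(a, b) for the singlet state
    |psi> = (|01> - |10>)/sqrt(2), computed by deterministic arithmetic:
    sum over outcomes of sa*sb*|<sa,sb|psi>|^2."""
    s = 1.0 / math.sqrt(2)
    E = 0.0
    for sa in (+1, -1):
        for sb in (+1, -1):
            ua, ub = eigvec(a, sa), eigvec(b, sb)
            amp = s * (ua[0] * ub[1] - ua[1] * ub[0])  # <ua, ub | psi>
            E += sa * sb * amp * amp
    return E

print(singlet_E(0.0, math.pi / 3))  # -cos(pi/3) = -0.5
print(singlet_E(0.7, 0.7))          # -1: perfect anti-correlation at theta=0
```

Of course, a simulation of the wavefunction is not a local hidden-variable model of the outcomes; it just shows that deterministic local machinery can host quantum evolution laws, which is the (weaker) point being made here.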
There is a no-go as you say. And clearly his paper cannot describe quantum mechanical events such as spin at this time. To do that, he needs to model the cos^2 theta relationship which is absent. Also the HUP, similarly absent. These are the sticking points as we know from Bell/EPR.
And significantly for this type of model: he needs there to be "perfect" correlations! In other words, when theta=0 there needs to be perfect agreement. I fail to see how a cellular automaton program is going to produce a random sequence plus perfect correlation when the entangled particles can be measured at different points in time (and therefore at different stages of their evolution).
Clearly, his paper is an early dry run attempt to get the matter on the table. But I strongly disagree with any idea that this can be considered science. If he can make something of this, great, but as far as I am concerned this is as much science as:
a) A theory of perpetual motion machines (since we already have a no-go for that too)
b) Santa Claus
Even if such an idea could be proved, it would be completely meaningless as we would have no better formalism than QM when done. (Since the entire purpose of this exercise would be to recreate the predictions of QM.) Looking at the title of this thread, it is clear that we do not have - in this paper - a LR theory that violates Bell.
1. The "periodic table" and the Standard Model are both QM. It is true that both QM and GR are free, so what? The freedom assumption is not necessary for them to work; they are "agnostic" about it. Were the experimenters not free, the two theories would continue to function very well. So, clearly, the denial of "freedom" does not go against either of them. I fail to see what the substance of your argument is. Do you want to say that a new idea is unscientific just because it was not a part of some other theory, or what?
2. GR is "free" only because it is limited in scope to gravity. Imagine a universe populated only by black holes. Ignoring the Hawking radiation, this would be an example of a universe described only by GR. Let's take two groups of about 10^26 black holes and call them "Alice" and "Bob". Do you think that the two groups are "free" or not?
Yes, the experimenter is either free or not. If it is not free, then it is not free in any experiment, not only in EPR ones.
This area is singled out because it is the only one that forces one to choose between freedom, locality and realism. In fact, I can easily turn your argument against non-locality or non-realism. You seem to forget that, just like freedom, those two assumptions are also assumed in other branches of science. A medic, astronomer or chemist does not ponder whether some non-local signal alters their experiments or whether the subject of their experiment does not really exist. So, if you want to follow your argument to all its logical consequences, you should reject not only superdeterminism but non-realism and non-locality as well, leaving you with the all-encompassing "god did it" solution.
Well, there is a Nobel-prize laureate in particle physics who disagrees with you on this matter. There is also a scientist, John Bell, who also disagrees with you (an entire chapter in his book is dedicated to the freedom hypothesis). So, I think I am in good company.
Hmmm, that chapter is 4 pages long and ends with this:
A theory may appear in which such conspiracies inevitably occur, and these conspiracies may then seem more digestible than the non-localities of other theories. When that theory is announced I will not refuse to listen, either on methodological or other grounds. But I will not myself try to make such a theory.
I would say my point of view quite agrees with Bell here: a) if a theory appears, I will listen; b) there is no theory to consider at this time. Certainly 't Hooft's is not even close to being a theory (more of an idea for an abstract), and I seriously doubt there EVER will be one. Why would I make such a "bold" statement? Because there is a monumental leap required for such a theory, as I have previously pointed out.
An attempt to quantify that leap is detailed in this paper: Experimenter’s freedom in Bell’s theorem and quantum cryptography by Kofler, Paterek and Brukner (2005). They conclude:
"The violation of Bell’s inequalities is an experimental fact. Within a local realistic program this fact can only be explained if the experimenter’s freedom in choosing between different measurement settings is denied modulo known loopholes, considered by most scientists to be of technical nature. For a local realist our results show that both the number of settings in which the freedom is abandoned grows exponentially and the degree of this abandonment saturates exponentially fast with the number of parties. For the present authors, however, these results are rather an indication of the absurdity of the program itself."
Elsewhere, Brukner refers to the superdeterministic concept as "grotesque" and uses an idea loosely akin to my counterexample involving radioactive samples:
"Grotesque: The experimenters‘ choice could be generated by the parity of the number of cars passing the laboratory within n seconds, where n is given by the fourth decimal of the cube of the actual temperature in degrees Fahrenheit …"
So as far as I can see, the question about superdeterminism has been asked and answered in the negative by Bell, Brukner and others.
I don't think either of us will change the other's mind at this point. I think your posts would work better if you identified the non-standard science as being your opinion, rather than trying to trick others who don't know any better.
It hasn't. They have only argued why it looks implausible. 't Hooft has clearly explained why these arguments, while looking convincing, do not apply to deterministic theories.
So, the argument:
makes no sense as a rigorous argument, because there is no such thing as an "experimenter's choice" in the first place.
If you attempt to make the argument more rigorous, you have to assume that some set of initial conditions exists from which you can freely choose. By evolving these states forward, you get the possible states the universe is in now. But that then automatically induces correlations between everything, because the entropy of the current universe is much larger than the entropy of the early universe. So only an astronomically small fraction of the microstates corresponding to the current macrostate are possible states the universe can be in.
Then a correlation function that you can measure in an experiment is defined as an appropriate ensemble average. If you consider a different correlation function, you will automatically average over a different ensemble. Bell's inequalities then do not apply here.
That is circular, you are assuming that which you want to prove. I.e. you are assuming strict determinism.
And anyway, even if you assume strict determinism, that doesn't explain why the experimenter's choices "conspire" to violate a Bell Inequality. That is what Brukner is saying. Essentially, I could invent hundreds of bizarre algorithms for how settings are selected and all would need to be explained by a local realistic theory.
For example: you could say that the tossing of a die by Alice is deterministic, and IF you knew all the initial conditions precisely, you would know what number would come up. Fine. Now Bob tosses his die by the same rules. And yet there is no correlation. Try it; you will see for yourself that no matter how many times you roll the dice, there will be no correlation. Hmmm. What does that tell us? That entangled particles, and ONLY entangled particles, will show this correlation.
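The dice point is easy to check numerically. A quick sketch (my own setup: two independently seeded deterministic PRNGs standing in for Alice's and Bob's deterministic dice), showing that shared determinism alone produces no correlation beyond chance:

```python
import random

def correlated_fraction(seed_a, seed_b, n=100_000):
    """Roll two dice driven by independent deterministic (seeded) PRNGs
    and measure how often they show the same face."""
    rng_a = random.Random(seed_a)
    rng_b = random.Random(seed_b)
    agree = sum(rng_a.randrange(6) == rng_b.randrange(6) for _ in range(n))
    return agree / n

print(correlated_fraction(1, 2))  # ~1/6: agreement at pure-chance level
```

Each sequence is fully determined by its seed, yet the agreement rate stays at the chance level of 1/6; determinism by itself buys you nothing toward Bell correlations.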
I strongly disagree. If you interpret these theories as being COMPLETE, i.e., if you assume that these theories describe EVERYTHING, including the behavior of experimentalists themselves, then measurements CANNOT be freely chosen. Instead, the measurements themselves are determined by these physical laws.
Conversely, if you assume that experiments CAN be freely chosen, then you cannot avoid the conclusion that the physical laws we know are not complete. There is something (the free will) that cannot be described by them. In this case, there is a free will theorem that says that if humans have free will, then quantum particles have free will as well.
Concerning the Bell theorem, one of its ASSUMPTIONS is the ability to choose experiments at will. If you give up that assumption, as 't Hooft does, then it IS possible to have a local hidden variable theory compatible with QM. In fact, Bohmian mechanics can also be reformulated in this way, as a local superdeterministic theory. The problem with such a reformulation is that Bohmian mechanics then looks much more complicated and artificial, but it is possible to do.
I disagree with your disagreement. You are discussing the meaning of the word "complete" which as you know is full of multiple meanings. I prefer to avoid that debate.
1. Clearly, the issue is not really free will. That actually has no meaning at all in the debate per se. The true question: is the subensemble of detected events representative of the universe of possible measurements? And I am saying that, logically, one could assert with any physical theory that it is not. There is nothing special about QM in particular that makes this an issue. But is that assertion scientific at any time?
I consider it axiomatic that science must necessarily involve sampling, and that all physical laws are attempts to develop patterns of behavior that have organizational or predictive value. I fail to see how the "anti-fair sampling hypothesis" has any meaning in this context.
On the other hand, you could easily assert that even in experiments such as Rowe, fair sampling is not strictly ruled out anyway! Sure, you look at all events that are sampled. But of course that is merely a subsample too. Perhaps there was still a conspiracy to force the experimenters to think they chose the time and place of the experiment, and their settings at that time, when in reality these were not freely chosen, i.e. there is no free will. And yet that wouldn't matter unless the subensemble of events is not representative of the entire universe of events. (Don't get me wrong, I am not arguing that closing the fair sampling loophole is not an important achievement: it certainly shows there is no obvious mechanism causing bias in non-detection.)
I conclude: (free will)=false does not imply (detected subensemble)=not representative. I.e., superdeterminism does not imply the anti-fair-sampling assumption any more than it implies the fair sampling assumption. You could have (free will)=false AND (detected subensemble)=representative just as easily. And in my opinion, the fair sampling hypothesis is an axiom of science: it is what justifies using physical laws to predict the outcomes of experiments. If it were not justified, we could not use physical laws to predict anything.
2. A local realist may attempt to use the superdeterminism argument to justify their stance. What they are really saying is that:
a) The world is local realistic (both reasonable ideas I quite agree with);
b) Bell's Theorem's usual conclusion (QM incompatible with LR) is OK, but QM is wrong;
c) Specifically, QM's cos^2 theta rule for correlations is wrong, to avoid Bell;
d) Experiments in support of that rule are also wrong, because fair sampling is wrong;
e) Superdeterminism is the mechanism that allows fair sampling to be violated.
But I say that is unscientific per above. For their argument to make sense (since superdeterminism does not imply anti-fair sampling hypothesis), the local realist needs to advance a superdeterministic theory that violates fair sampling AND agrees with QM predictively on the matter of the cos^2 rule. Certainly 't Hooft is not supplying this, nor is he claiming to. He is simply trying to show a path which might be explored to produce such.
The cos^2 rule IS useful, and its utility justifies it. (Sort of like the argument that a history of the sun rising every day does not prove the sun will rise tomorrow.) So even at the end of the day, if 't Hooft got his wish and produced such a theory, we are guaranteed no more predictive power than what we have today. Rather seems pointless to me. As I read Bell's words, I believe he fully concluded the same: "although there is an escape route there, it is hard for me to believe that quantum mechanics works so nicely for inefficient practical set-ups, and is yet going to fail badly when sufficient refinements are made."
3. As to Bohmian perspective: why would you need to assert the anti-fair sampling hypothesis when the non-local nature provides the explicit mechanism to explain entanglement correlations?