I hope that this proposal, coming from a Nobel laureate in particle physics, will open the road to a more serious study of this class of theories, wrongly assumed to be ruled out.
Why do you say that? Bell's theorem is a theorem, meaning that it either holds or it doesn't. And if you prove it, then it does. It also follows that if you satisfy the assumptions that went into the proof, you cannot avoid its consequences. The only way to sidestep a theorem is to sidestep its assumptions, for example by showing that some of the assumptions need not always be satisfied for, say, interesting physical models. So what exactly was "wrong"? (Up to now?)
Absurd, all you have to do is read the paper. This is a shell game and does nothing to change Bell in any way.
From the paper:
1. "if Bell inequalities still apply to our system, are they indeed the ones that are routinely being tested in many of today's carefully designed experiments, or are they formal features that have nothing to do with presently observable quantities? This is the important question that we wish to address."
The Bell question is whether a local realistic theory can reproduce the predictions of QM. His paper tries to show that a computer program (cellular automata) can have local realistic features. OK, who cares? It does not reproduce QM statistics when Alice and Bob do their things. I.e. Fail. (The entire point of Bell was that ALL of the features of QM cannot be reproduced in a local realistic theory.)
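The gap can be made concrete with a toy sketch (my own illustration, not a model from the paper): a deterministic local hidden-variable rule, where each outcome depends only on the local setting and a shared variable lam, tops out at the CHSH bound of 2, while QM predicts 2√2 for entangled photons.

```python
import math

def outcome(setting, lam):
    """Deterministic local rule: a +/-1 result computed from the local
    polarizer setting and the shared hidden variable lam alone."""
    return 1 if math.cos(2 * (setting - lam)) >= 0 else -1

def correlation(x, y, steps=20000):
    """Average product of the two outcomes over a uniform hidden variable."""
    total = 0
    for i in range(steps):
        lam = math.pi * (i + 0.5) / steps
        total += outcome(x, lam) * outcome(y, lam)
    return total / steps

# Standard CHSH settings (radians): a, a' for Alice, b, b' for Bob.
a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8

def chsh(E):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

s_lhv = chsh(correlation)                        # local deterministic model
s_qm = chsh(lambda x, y: math.cos(2 * (x - y)))  # QM prediction for photons

print(s_lhv)  # about 2.0 -- pinned at the CHSH bound
print(s_qm)   # 2*sqrt(2), about 2.83 -- what experiments actually see
```

Any local rule plugged into `outcome` gives |S| ≤ 2; that is Bell's point, and it is independent of how clever the rule is.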
2. "Thus, the entangled states are needed to describe our universe from day one. In any meaningful description of the statistical nature of our universe, one must assume it to have been in quantum entangled states from the very beginning. In contrast, the evolution law may always have been a deterministic one.... Clearly, the only way to handle quantum entangled states, is by assuming that these states were quantum-entangled right from day one in the universe. This, we think, should not be a problem at all..."
His assumption was that there is a set of initial conditions which survives to today. This is essentially a rehash of his earlier paper on superdeterminism. I do not consider superdeterminism to be science. It is strictly ad hoc and borders on the religious.
The upshot: yes, 't Hooft is a great physicist. But this paper changes nothing about Bell's Theorem's conclusions/import, and will only be given attention because of his name. I have no idea why he has taken on this strange crusade when there are plenty of other theoretical issues out there that could use his attention.
Another way to see the problem with this paper:
"Clearly, the only way to handle quantum entangled states, is by assuming that these states were quantum-entangled right from day one in the universe. This, we think, should not be a problem at all."
If there is information encoded in the model from the beginning of the universe, as he states, then why doesn't EVERY pair of photons exhibit Bell correlations? In fact, the only pairs that do are ones created from entangled photon sources such as PDC crystals. I.e. ones that were born just in the last few seconds. That stands directly in contradiction to his hypothesis.
So now he is trying to say (my paraphrase): "The observers were once in causal contact (and their choices were somehow constrained), so that is why Bell inequalities are violated with these photons. Oh, but I propose no explanation as to how this occurs (or even relates to the cos^2 function), I just wave my hand and say it is possible. And I cannot explain why the entangled nature of the entire universe doesn't manifest itself otherwise."
As I have said many times previously: show me how space-like separated observers - using radioactive decay to select polarizer settings - can have initial conditions which cause those choices to themselves be pre-determined. That is an absolute requirement of any superdeterministic theory, and yet would absolutely violate the Standard Model and experiment regarding the random nature of those samples. As well as violating any concept of what science is.
(You know, maybe Santa Claus exists too. It is still NOT science, but 't Hooft can't prove me wrong on that!)
't Hooft is simply arguing that working on deterministic models is not a priori a fruitless effort. He does claim that it is possible to derive a bona fide quantum theory from a deterministic theory without any handwaving.
His arguments about superdeterminism are simply meant to explain the apparent contradiction with the fact that Bell's inequalities are violated.
A particular assumption involved in Bell's theorem, namely the statistical independence assumption, is almost never questioned, although there are good reasons to do so. On the other hand, the so-called "realism assumption" and locality are often questioned. This different treatment is wrong IMO.
No, as DrFaustus correctly pointed out, Bell's theorem is only relevant for those models that satisfy its assumptions. 't Hooft's model does not satisfy the statistical independence assumption therefore Bell's theorem is irrelevant. His model (the universe as a cellular automaton) has a lot of problems, like achieving Lorentz invariance, but Bell's theorem is not one of them.
Read Bell again. He speaks about the option 't Hooft chooses to research and does acknowledge that it is possible. I have repeatedly given you the exact quote but you have chosen to ignore it, continuing to repeat your mantra that "no local-realistic... blah blah".
Your personal opinion about superdeterminism is simply irrelevant in the absence of some justification. You have presented no justification for the claim that 't Hooft's model is not science but religion.
Of course Bell's Theorem's conclusions are not changed in any way and nobody tries to do that. It is your misunderstandings about what those conclusions are that need to change.
I'm waiting for 't Hooft to come up with a paper about the Leggett experiments conducted by the Gisin and Zeilinger groups with the full involvement of Leggett (Dr. Macrorealism) himself. Maybe this was just a warm up for something along those lines. If not, why not?
Couple of interesting papers by Charles Tessler suggest that you don't even need the locality assumption for Bell. You can Occamize it out. It's all about realism.
I tried to find some of those Tessler papers, but had no luck. Can you point me in a direction?
1. Of course Bell is an obstacle for ANY physical theory that covers the same ground as QM. Any approach that hopes to eventually cover that ground is also affected. Now 't Hooft can choose to delay the day of reckoning, or not. But it will come and Bell will come into play. No local realistic physical theory can provide the same predictions as QM. That includes theories of cellular automata. Really, how is that hard to understand?
2. Bell does not mention superdeterminism in his original paper, and was not a believer. As to his mentioning it in subsequent discussions, again, so what?
3. My opinion - ha - is simply that superdeterminism = Santa Claus. I am sure there are plenty of readers here who agree that there is NOT ONE iota of evidence for it, NOT ONE hint of it, and it smacks of grasping at straws.
4. My understanding is the same as the mainstream scientific community: local realism is excluded as inconsistent with QM, which is experimentally verified. You are the one on the outside looking in.
But it really isn't possible unless you can overcome some obstacles. Because of Bell's Theorem, we now know that the results of Bell tests must be explained for the idea to have merit. So we must add a new assumption to those of locality (L) and realism (R): the idea that the initial conditions of the universe (ICU) are accessible.
So the issue is about ICU: is this a provable or disprovable hypothesis? And the answer is YES, such an assumption has definite baggage associated with it that 't Hooft must be aware of but chose not to address. So I will say it:
1. Why do ONLY entangled particle pairs display this behavior? I would expect it to appear everywhere! After all, the hypothesis is that the choice of Alice and Bob's measurement setting was superdetermined by the ICU.
2. Suppose I have Alice and Bob's measurement settings chosen based on an algorithm tied to radioactive decay of a sample of uranium. That involves the weak force. The settings for Alice and Bob should appear random. We expect that the Bell Inequalities will be violated still, correct?
Well if they are, we now have to postulate as to the mechanism by which the ICU are able to communicate and express themselves via the radioactive sample, then run through the algorithm to ultimately select the correct pair of settings for Alice and Bob so that the local realistic observations at Alice and Bob can correlate. A tall order indeed, considering none of this is part of the Standard Model!
But the funny thing is, I can change the algorithms any way I want - I can reverse the results at Alice for example - or otherwise bias the results some way. And yet regardless of the algorithm I choose, I expect the results to violate a Bell Inequality! And that remains true even if the settings at Alice and Bob are left static.
So without some explanation of how I can play these games with the choice of Alice and Bob's settings, I don't see how ICU can be taken seriously as an assumption. And please, don't tell me that it cannot be ruled out as my point is that it can - if an ACTUAL mechanism were given to us to consider. (But 't Hooft did not do that - he showed us something else entirely.)
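The claim in the last paragraphs can be checked numerically (my own sketch, which samples outcomes from the standard QM prediction rather than from any superdeterministic mechanism): feed an unpredictable bit stream through two arbitrary, even deliberately reversed, setting algorithms, and the CHSH value computed from the resulting data is still 2√2.

```python
import math
import random

rng = random.Random(7)

def decay_bit():
    # Stand-in for a radioactive-decay trigger: one unpredictable bit.
    return rng.getrandbits(1)

# Two arbitrary "algorithms" turning decay bits into setting choices --
# the point is that ANY such rule can go here, even a biased one.
alice_algo = lambda bit: [0.0, math.pi / 4][bit]
bob_algo = lambda bit: [math.pi / 8, 3 * math.pi / 8][bit ^ 1]  # reversed

def qm_pair(x, y):
    """Sample one +/-1 outcome pair with the QM correlation cos(2(x-y))."""
    a = 1 if rng.random() < 0.5 else -1
    same = rng.random() < (1 + math.cos(2 * (x - y))) / 2
    return a, (a if same else -a)

counts = {}
for _ in range(400000):
    x, y = alice_algo(decay_bit()), bob_algo(decay_bit())
    a, b = qm_pair(x, y)
    n, s = counts.get((x, y), (0, 0))
    counts[(x, y)] = (n + 1, s + a * b)

E = {k: s / n for k, (n, s) in counts.items()}
a0, a1, b0, b1 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
S = E[(a0, b0)] - E[(a0, b1)] + E[(a1, b0)] + E[(a1, b1)]
print(S)  # still about 2.83, no matter what the setting algorithms were
```

Swap in any other deterministic mapping from bits to settings and S does not budge; that invariance is exactly what a superdeterministic mechanism would have to explain away.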
That could be because his name isn't Charles Tessler. It's Charles Tresser.
I'm the Concept guy, see. Generally I leave the details to my serfs.
Great, I like the direction of these papers. Please give your serfs my thanks.
The correct formulation is: No local realistic and "free" physical theory can provide the same predictions as QM. By "free" I understand a theory that admits that the measurement directions can be freely chosen. The cellular automata model does not admit this. Why do you find this so hard to understand?
- Gravity works as described by GR AND the measurements can be freely chosen...
- The periodic table drives chemical reactions AND the measurements can be freely chosen...
- Particle experiments work per the Standard Model AND the measurements can be freely chosen...
Do you notice that "freely chosen" can be attached to ANY scientific theory? Somehow makes that phrase completely meaningless. Which is why it is NOT science. Why do you find this hard to understand?
There is nothing about the free choice of the experimenter that is ANY different for Bell Tests than any OTHER scientific experiment. Only desperation would lead someone to single this area out. (In my opinion, of course.) Every experiment about local realism shows the same results, without exception: QM cannot be local realistic. That is not just my opinion, that is the result of thousands of experiments combined with an important and well proven theorem.
I'd guess that would be because of a chain of events that he has no control over. If his theory is right. But if his theory is right, it would make no sense in asking why you don't accept his ideas either.
All 't Hooft has to do is derive quantum mechanics; he doesn't really need to explain how it is possible that Bell's inequalities are violated. What 't Hooft is doing is simply defending the pursuit of work on these theories despite an apparent no-go theorem. What he is writing about this is not "his theory".
To see that what 't Hooft is doing is possible in principle, consider a classical computer that simulates the ordinary quantum evolution laws. The degrees of freedom that describe the computational states of the computer are being manipulated in a local deterministic way. Yet, the computer is able to give rise to a virtual world that is described by the usual quantum mechanical laws.
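As a minimal illustration of that point (my own toy, not 't Hooft's construction): an ordinary deterministic program can step a qubit's amplitudes forward under a unitary, and the resulting "virtual" quantum state obeys the usual rules even though every underlying machine operation is classical and deterministic.

```python
import math

# A classical, deterministic program that steps a qubit's amplitudes
# forward under a fixed unitary (a small rotation). Every machine
# operation below is ordinary deterministic arithmetic, yet the numbers
# it manipulates follow the quantum update rule of the virtual world.

def step(state, theta):
    """Apply the 2x2 rotation U(theta) to the amplitudes (alpha, beta):
    one deterministic update of the simulated quantum state."""
    alpha, beta = state
    c, s = math.cos(theta), math.sin(theta)
    return (c * alpha - s * beta, s * alpha + c * beta)

state = (1.0 + 0j, 0.0 + 0j)            # start in |0>
for _ in range(100):
    state = step(state, math.pi / 400)  # 100 small steps, pi/4 in total

p0 = abs(state[0]) ** 2  # Born-rule probabilities of the virtual world
p1 = abs(state[1]) ** 2
print(p0, p1)  # ~0.5 and ~0.5; the norm p0 + p1 stays at 1
```

Unitarity (the norm staying at 1) is preserved automatically because each deterministic step is a rotation; nothing random happens inside the computer.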
There is a no-go as you say. And clearly his paper cannot describe quantum mechanical events such as spin at this time. To do that, he needs to model the cos^2 theta relationship which is absent. Also the HUP, similarly absent. These are the sticking points as we know from Bell/EPR.
And significantly for this type of model: he needs there to be "perfect" correlations! In other words, when theta=0 there needs to be perfect agreement. I fail to see how a cellular automata program is going to produce a random sequence plus perfect correlation when the entangled particles can be measured at different points in time (and therefore at different stages of their evolution).
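The difficulty can be quantified (a toy comparison of my own, assuming only the textbook QM prediction for polarization-entangled pairs): QM says the two photons agree with probability cos²(theta), i.e. perfectly at theta = 0, whereas a naive local model in which each pair carries one shared polarization and each photon independently obeys Malus's law agrees only about 75% of the time even at theta = 0.

```python
import math
import random

def quantum_agreement(theta):
    # For polarization-entangled photons, QM predicts the pair agrees
    # (both pass or both absorbed) with probability cos^2(theta).
    return math.cos(theta) ** 2

def classical_agreement(theta, trials=200000, seed=1):
    # Naive local model: each pair carries a shared polarization lam;
    # each photon independently passes its polarizer with the Malus
    # probability cos^2(setting - lam).
    rng = random.Random(seed)
    agree = 0
    for _ in range(trials):
        lam = rng.uniform(0, math.pi)
        a = rng.random() < math.cos(0.0 - lam) ** 2    # Alice at 0
        b = rng.random() < math.cos(theta - lam) ** 2  # Bob at theta
        agree += (a == b)
    return agree / trials

print(quantum_agreement(0.0))    # 1.0: perfect correlation
print(classical_agreement(0.0))  # ~0.75: this local model can't reach 1
```

Reaching perfect agreement at theta = 0 while keeping each local sequence random is exactly the constraint a cellular automaton model would have to satisfy.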
Clearly, his paper is an early dry run attempt to get the matter on the table. But I strongly disagree with any idea that this can be considered science. If he can make something of this, great, but as far as I am concerned this is as much science as:
a) A theory of perpetual motion machines (since we already have a no-go for that too)
b) Santa Claus
Even if such an idea could be proved, it would be completely meaningless as we would have no better formalism than QM when done. (Since the entire purpose of this exercise would be to recreate the predictions of QM.) Looking at the title of this thread, it is clear that we do not have - in this paper - a LR theory that violates Bell.
1. The "periodic table" and the Standard Model are both QM. It is true that both QM and GR are free, so what? The freedom assumption is not necessary for them to work; they are "agnostic" about it. Were the experimenters not free, the two theories would continue to function very well. So, clearly, the denial of "freedom" does not go against either of them. I fail to see what the substance of your argument is. Do you want to say that a new idea is unscientific just because it was not a part of some other theory, or what?
2. GR is "free" only because it is limited in scope to gravity. Imagine a universe populated only by black holes. Ignoring the Hawking radiation, this would be an example of a universe described only by GR. Let's take two groups of about 10^26 black holes and call them "Alice" and "Bob". Do you think that the two groups are "free" or not?
Yes, the experimenter is either free or not. If it is not free, then it is not free in any experiment, not only in EPR ones.
This area is singled out because it is the only one that forces one to choose between freedom, locality and realism. In fact, I can easily change your argument to go against non-locality or non-realism. You seem to forget that, just like in the case of freedom, those two assumptions are also assumed in other branches of science. A medic, an astronomer, or a chemist does not ponder whether some non-local signal alters their experiments, or whether the subject of their experiment does not really exist. So, if you want to follow your argument to all its logical consequences, you should reject not only superdeterminism but non-realism and non-locality as well, leaving you with the all-encompassing "God did it" solution.
Well, there is a Nobel-prize laureate in particle physics who disagrees with you on this matter. There is also a scientist, John Bell, who also disagreed with you (an entire chapter in his book is dedicated to the freedom hypothesis). So, I think I am in good company.