I Why all the rejection of superdeterminism?

  • #51
Demystifier said:
Can you make an exact quote of Bell?
In [1] (listed below), Bell clearly specifies his assumptions in response to critique of his original formulation (italics are his):
J.S. Bell said:
'It has been assumed that the settings of instruments are in some sense free variables ...' For me this means that the values of such variables have implications only in their future light cones. They are in no sense a record of, and do not give information about what has gone before.
In a deterministic universe, this assumption is obviously violated. For it means that these values are independent of the values of all observables prior to the moment the settings are made. Indeed, if they depended on the latter, fixing one of the values would impose a nontrivial relation on the prior variables in their past light cone, and hence would give information about them. Thus the settings give information about what came before, in direct contradiction to Bell's assumption.

Note that no extra ingredient allegedly distinguishing superdeterminism from determinism is necessary to arrive at this contradiction.

In addition, Bell's assumption is intuitively unreasonable. Any setting created by a human experimenter is obviously determined by the latter's intentions, which precede the setting and hence violate Bell's assumption. And any setting created by a computer-driven automatic device is obviously determined by the latter's program, which precedes the setting and hence violates Bell's assumption, too.

Thus the logic of Bell's argument requires a nondeterministic universe.

  1. J. S. Bell, Free variables and local causality, Epistemological Letters, Feb. 1977. Reprinted as Chapter 12 of J. S. Bell, Speakable and Unspeakable in Quantum Mechanics (Cambridge University Press 1987)
 
Last edited:
  • #52
DrChinese said:
But I have no idea why you would pick this position to defend, given your usual attention to key points in Physics.
Because I like to distinguish between clear logic and wishful thinking.
 
  • #53
A. Neumaier said:
... the logic of Bell's argument requires a nondeterministic universe.

It most certainly does not, and none of your references implies as much. The fair sampling hypothesis works whether or not the universe is deterministic. It is only by violating the fair sampling hypothesis - and in a very specific way - that superdeterminism speculatively operates. Bell never said otherwise, as he clearly assumed that the reader would fill in that point to the extent it was not stated.

Further, this entire line of reasoning is moot anyway as there are other (non-Bell) tests in which there is no statistical component and/or there is no observer choice. With GHZ, for example, a single example will produce results in contradiction to local realism. Superdeterminism does NOT rescue that.

[edited to add:] Where does your "clear logic" stand on this? :smile:
 
  • Like
Likes Demystifier
  • #54
A. Neumaier said:
Because I like to distinguish between clear logic and wishful thinking.

That's the kind of comment that I was looking for! :smile:

I feel better already.

-DrC
 
  • #55
DrChinese said:
With GHZ, for example, a single example will produce results in contradiction to local realism.
True. But this has nothing to do with the present thread, which is, according to the OP, neither about local realism nor about which things can be proved with or without superdeterminism.
 
  • #56
DrChinese said:
It most certainly does not, and none of your reference implies as much.
I claimed that ''the logic of Bell's argument requires a nondeterministic universe'', and this is true for the argument under discussion, viewed on purely logical grounds.

On the other hand, I agree that Bell's assumptions are reasonable from a practical point of view, and in the paper I referred to he didn't want to insist on more. This is enough for me to rule out local realism in Bell's sense.

However, I still believe that Nature is deterministic and we'll discover one day a realistic description. But it will neither be one that would be local realistic in Bell's sense, nor one that is superdeterministic like Bohmian mechanics where an extraordinary fine-tuning of the actual universe is needed so that our realization of the universe behaves precisely according to the assumed quantum equilibrium hypothesis.
 
  • #57
I feel like I'm arguing a two-front war here. On the one hand, I don't think that it's impossible to have a superdeterministic explanation for QM statistics. On the other hand, I think that such a theory would be very bizarre, and nothing like any theory we've seen so far.

Let me go through a stylized description of an EPR-like experiment so that we can see where the superdeterminism loophole comes in.

We have a game with three players, Alice, Bob and Charlie. Alice and Bob are in different rooms, and can't communicate. In each room, there are three light bulbs colored Red, Yellow and Blue, which can be turned off or on.

The game consists of many many rounds, where each round has the following steps:
  1. Initially, all the lights are off.
  2. Charlie creates a pair of messages, one to be sent to Alice and one to be sent to Bob.
  3. After Charlie creates his messages, but before they arrive, Alice and Bob each choose a color, Red, Yellow or Blue. They can use whatever criterion they like for choosing their respective colors.
  4. When Alice receives her message, she follows the instructions to decide whether to turn on her chosen light, or not. Bob similarly follows his instructions.
After playing the game for many, many rounds, the statistics are:
  • When Alice and Bob choose the same color, they always do the opposite: If Alice's light is turned on, Bob's is turned off, and vice-versa.
  • When Alice and Bob choose different colors, they do the same thing 3/4 of the time, and do the opposite thing 1/4 of the time.
The question is: What instructions could Charlie have given to Alice and Bob to achieve these results? The answer, proved by Bell's theorem, is that there is no way to guarantee those results, regardless of how clever Charlie is, provided that
  1. Charlie doesn't know ahead of time what colors Alice and Bob will choose.
  2. Alice has no way of knowing what's going on in Bob's room, and vice-versa.
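The classical side of this game can be checked by brute force. A minimal sketch (the encoding of "instruction sets" is my own, not from the thread): perfect same-color anti-correlation forces Bob's instructions to be the negation of Alice's, and enumerating all eight of Alice's possible instruction sets shows the different-color agreement rate can never exceed 2/3, short of the required 3/4:

```python
from itertools import product

COLORS = ("Red", "Yellow", "Blue")

# Enumerate every local "instruction set" Alice could carry: a fixed
# on/off answer per color.  Perfect same-color anti-correlation forces
# Bob's instruction set to be Alice's, negated.
best = 0.0
for f in product([True, False], repeat=3):
    alice = dict(zip(COLORS, f))
    bob = {c: not v for c, v in alice.items()}
    pairs = [(a, b) for a in COLORS for b in COLORS if a != b]
    same = sum(alice[a] == bob[b] for a, b in pairs) / len(pairs)
    best = max(best, same)

print(best)  # 2/3 at best -- below the required 3/4
```

Since no instruction set reaches 3/4, Charlie cannot guarantee the statistics under the two conditions above, which is the content of Bell's theorem for this stylized game.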
The superdeterministic loophole

If Charlie does know what choices Alice and Bob will make, then it's easy for him to achieve the desired statistics:
  • Every round, he randomly (50/50 chance) sends either the message to Alice: "turn your light on", or "turn your light off"
  • If Alice and Bob are predestined to choose the same color, then Charlie sends the opposite message to Bob.
  • If Alice and Bob are predestined to choose different colors, then Charlie will send Bob the same message 3/4 of the time, and the opposite message 1/4 of the time.
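Charlie's predestination strategy above is easy to simulate. A hedged sketch (variable names and the round encoding are mine), confirming that the target statistics emerge once Charlie knows the predestined colors:

```python
import random

COLORS = ["Red", "Yellow", "Blue"]

def play_round(rng):
    # Colors Alice and Bob are "predestined" to pick; Charlie knows both.
    a_color = rng.choice(COLORS)
    b_color = rng.choice(COLORS)
    a_on = rng.random() < 0.5                 # Alice's message: 50/50 on/off
    if a_color == b_color:
        b_on = not a_on                       # same color: always opposite
    else:
        b_on = a_on if rng.random() < 0.75 else not a_on
    return a_color, b_color, a_on, b_on

rng = random.Random(0)
same_color = diff_total = diff_match = 0
for _ in range(100_000):
    ac, bc, a_on, b_on = play_round(rng)
    if ac == bc:
        same_color += 1
        assert a_on != b_on                   # perfect anti-correlation
    else:
        diff_total += 1
        diff_match += (a_on == b_on)

print(round(diff_match / diff_total, 3))      # close to 0.75
```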
Why the superdeterministic loophole is implausible

The reason that the superdeterministic loophole is implausible is that Alice and Bob can choose any mechanism they like to help them decide what color to use. Alice might look up at the sky and choose the color based on how many shooting stars she sees. Bob might listen to the radio and make his decision based on the scores of the soccer game. For Charlie to be able to predict what Alice and Bob will choose would potentially involve everything that could possibly happen to Alice and Bob during the course of a round of the game. The amount of information that Charlie would have to take into account would be truly astronomical. The processing power would be comparable to that required to accurately simulate the entire universe.

Why I think the superdeterministic loophole is actually impossible

What the superdeterministic loophole amounts to is that somehow Charlie has information about the initial state (before the game began) of the universe, s_0, and somehow he has a pair of algorithms, \alpha(s_0) and \beta(s_0), that predict the choices of Alice and Bob as a function of the initial state. The problem is that even if there were such algorithms, the time needed to compute the result would be greater than the time it takes to simply wait and see what Alice and Bob choose. So Charlie couldn't possibly know the results in time to choose his instructions to take those results into account.

Why not? Remember, we're allowing Alice and Bob to use whatever mechanism they like to decide what color to pick. So suppose Alice runs the same algorithm, \alpha, and chooses whatever color is NOT returned by \alpha(s_0). In other words, she runs the program, and if it returns "Red", she picks "Yellow". If it returns "Yellow", she picks "Blue". If it returns "Blue", she picks "Red". She can base her choice on anything, so if there is a computer program \alpha that she can run, then she can base her choice on whatever it returns.

The only way for it to be possible that \alpha(s_0) always gives the right answer for Alice is if it takes so long to run that Alice gives up and makes her choice before the program finishes.

This is actually a fairly standard argument that even if the universe is deterministic, if you tried to construct a computer program that is guaranteed to correctly predict the future, the future would typically arrive before the computer program finished its calculations. No matter how clever the algorithm, no matter how fast the processor, there is no way to guarantee that the prediction algorithm would be faster than just waiting to see what happens.
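The diagonalization move above can be written out directly; a toy sketch (names purely illustrative, not from the thread):

```python
# Alice's "contrarian" strategy: run the very predictor alpha that is
# supposed to foretell her choice, then pick a different color.
NEXT = {"Red": "Yellow", "Yellow": "Blue", "Blue": "Red"}

def alices_choice(alpha, s0):
    return NEXT[alpha(s0)]

# Whatever alpha predicts, the prediction is never what Alice does:
for predicted in ("Red", "Yellow", "Blue"):
    alpha = lambda s0, p=predicted: p         # stand-in predictor
    assert alices_choice(alpha, s0=None) != alpha(None)
```

The only escape, as noted above, is for alpha to run so slowly that Alice gives up before it returns.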
 
  • Like
Likes MrRobotoToo
  • #58
A. Neumaier said:
1. True.

2. But this has nothing to do with the present thread, which is, according to the OP, neither about local realism nor about which things can be proved with or without superdeterminism.

1. A victory! :smile:

2. From the OP: "superdeterminism (i.e. the experimentators are not free to choose the measurement parameters) allows the formulation of a local realistic quantum theory." [PS that's the diametric opposite of what you said.]

It is also true that in a deterministic universe, experimenters are not free to choose the measurement parameters. But Zeilinger (also mentioned in the OP) and others do not object to that point, as a Bohmian universe is deterministic (too) and is generally accepted as a viable interpretation of QM. The key element of a superdeterministic theory in particular is that every experimental sample is biased exactly such as to support the incorrect predictions of QM as compared to the actual local realistic laws of physics.

And further: there is no actual theory of that either because if there were, physicists would be tearing it to shreds. :biggrin:
 
Last edited:
  • #59
A. Neumaier said:
In a deterministic universe, this assumption is obviously violated. For it means that these values are independent of the values of all observables prior to the moment the settings are made.

In a deterministic universe, it's still the case that the initial state of the universe is undetermined. Each of us can only have knowledge of a part of the universe--that part in our backwards lightcone. We have no knowledge about other parts of the universe unless we wait long enough for information from those sections of the universe to come into our backwards lightcone.

So even in a deterministic universe, the choices made by Alice and Bob at some future time may not be determined by the part of the universe available now. So effectively, they are undetermined.
 
  • #60
stevendaryl said:
I feel like I'm arguing a two-front war here. On the one hand, I don't think that it's impossible to have a superdeterministic explanation for QM statistics. On the other hand, I think that such a theory would be very bizarre, and nothing like any theory we've seen so far.

I feel your pain, and recognize my own participation in it. :smile:

Yes, the theory would be bizarre, I get your point about that. But how bizarre? That is *my* point - there really is no end to the bizarre nature. As soon as you explain one thing, I would come up with another counterexample, and yet a more bizarre version will need to be put forth. We can do that all day (actually let's not and say we did).

And I think you say it nicely when you comment about the prediction algorithm and the time to execute it. When I considered what it would take to support a superdeterministic local realistic theory, I inevitably concluded that a near-infinite amount of information would need to be stored in every particle in order to provide the local realistic instruction set for Bell test outcomes. I think you would ultimately conclude the same, and see the inevitable circular reasoning required to support that position. How would you ever be able to draw a line between which particles are to be included in some future Bell test, and which aren't? Where does the instruction set reside for photons that don't even exist now? Or observers that don't exist now (presumably it is hidden in our DNA somewhere)? Or measurement apparatuses that don't exist now? Etc.
 
  • #61
DrChinese said:
allows the formulation of a local realistic quantum theory."
Oops! I had missed this. Superdeterminism in the form proposed by demystifier in post #4 indeed allows anything, because in a deterministic and reversible dynamics one can start at the wanted result and work backwards. So it also allows very irrelevant possibilities.

Demystifier said:
In such ‘superdeterministic’ theories the apparent free will of experimenters, and any other apparent randomness, would be illusory.
This is already illusory in a deterministic universe. Thus Bell argues for a nondeterministic universe.
 
  • #62
stevendaryl said:
In a deterministic universe, it's still the case that the initial state of the universe is undetermined.
No. Only unknown to us. There is a big difference between undetermined and unknown.

A machine operates according to its state no matter how much we know of it. The same is true of the universe, if it is machine-like, as Laplace painted it.
 
  • #63
A. Neumaier said:
No. Only unknown to us. There is a big difference between undetermined and unknown.

But for a local hidden variables theory of the type Bell considered, the parts of the universe relevant to Alice's and Bob's future choices have to be known at the time the twin pair was produced. That's impossible, in general. The parts of the universe that might affect Alice's choice were outside of the lightcone at the time the twin pair was produced.
 
  • #64
A. Neumaier said:
Oops! I had missed this.

All good. :smile:
 
  • #65
A. Neumaier said:
This is already illusory in a deterministic universe. Thus Bell argues for a nondeterministic universe.

No, it doesn't need to be nondeterministic, but there must be enough freedom in the initial conditions that knowing only part of the universe gives you no information about the rest of the universe.
 
  • #66
DrChinese said:
Further, this entire line of reasoning is moot anyway as there are other (non-Bell) tests in which there is no statistical component and/or there is no observer choice. With GHZ, for example, a single example will produce results in contradiction to local realism. Superdeterminism does NOT rescue that.

Now, THAT's a good point. I'll have to think about it.
 
  • Like
Likes Demystifier
  • #67
stevendaryl said:
the parts of the universe relevant to Alice's and Bob's future choices have to be known at the time the twin pair was produced.
No. It is enough to know that Alice and Bob have no choice at all to make all arguments logically invalid. Plausibility is a different matter, but what is plausible is always arguable.
stevendaryl said:
No, it doesn't need to be nondeterministic, but there must be enough freedom in the initial conditions that knowing only part of the universe gives you no information about the rest of the universe.
Not knowing what the choice is is still quite different from being able to make an arbitrary choice.
 
  • #68
A. Neumaier said:
No. It is enough to know that Alice and Bob have no choice at all to make all arguments logically invalid.

There is nothing per se relevant in the fact that Alice and Bob have no setting choices at all when running a Bell test... if in fact their sample is representative. Why would it be relevant?

The entire point is that the test outcomes would NOT constitute a fair sample, so that the Bell inequality is violated because of that. That is a variation on the usual fair sampling loophole. Since Alice and Bob have no choice at all, they are forced to pick a sample that the "universe" knows is not representative.
 
  • #69
stevendaryl said:
Now, THAT's a good point. I'll have to think about it.

Think about this one as well. So if the "true" match rate is 33% when the QM predicted (and observed) value is 25%, AND this is due to superdeterminism controlling the choice of Alice and Bob's measurement settings: you don't need time-varying/fast-switching as part of your test. You make no choice other than to have the angle at 120 degrees (or whatever) and leave the entire test running at that. No change. Ever. After all, the superdeterminism hypothesis is control over the measurement settings so that the "correct" (and misleading) sub-sample is picked, not some (light speed) signal from Alice to Bob. [Which is what fast-switching is intended to protect against.]

Now, at first the results show 25% but that is just the first part of the test stream. If the full universe is really closer to 33% [the Bell limit], presumably eventually - with a larger and larger sample - you should see a result greater than 33% to offset. Or at least something greater than 25% sometime. So you leave the test running for months, you are seeing... exactly what? Nothing but an increasingly long and sustained sample that is getting further and further away from the mean.
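As a sanity check on the 25% and 33% figures above (a sketch of the common three-setting arrangement with perfect same-setting correlation; the details of a real test differ):

```python
import math
from itertools import product

# QM match rate for analyzer settings 120 degrees apart
qm = math.cos(math.radians(120)) ** 2         # = 0.25

# Local instruction sets: perfect same-setting correlation forces both
# particles to carry the same fixed on/off answer for each of the 3
# settings; the worst-case match rate over different settings is the
# local-realist ("true") bound.
worst = 1.0
for f in product([0, 1], repeat=3):
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    match = sum(f[i] == f[j] for i, j in pairs) / len(pairs)
    worst = min(worst, match)

print(qm, worst)                              # 0.25 vs 1/3
```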

And yet: choice doesn't much enter into things, does it? There are no choices occurring. So now we need to amend our hypothesis to say: Alice and Bob don't get to control WHEN they START and END the test either. Oh, and by the way, samples that deviate FAR FAR FAR from the norm apparently come along in very large streams. Could be months-long even and many trillions of events! All the while, the test is running and no choices are being made. (And please, don't bother to think that no choice is also a choice. In this design, everything is static.)

Having a proper theory of superdeterminism to discuss would allow experimentalists to tear this apart. You may as well just hypothesize that the almighty intervenes in all tests of QM and alters outcomes to make it appear that Bell was right. Of course, you still would need to give up local realism since that almighty deity is non-local.

:biggrin: :biggrin: :biggrin:
 
  • #70
IMO a lot of confusion around this issue revolves around the idea of a single observer for a single experiment. In reality, the preparation of a system (e.g. in an energy-momentum eigenstate) requires a different "observer" to any subsequent observation (e.g. by a space-time located detector).

Determinism/causality in the sense of related time-ordered observations requires observers to bring their time co-ordinates to each observation. Since that time co-ordinate is observer-dependent there is the implication that there may be an underlying intrinsic reality that is time-independent. For example, in the intrinsic context of a photon, time stands still. So it is reasonable to suppose that an observer's time-dependent indeterminism might be built upon a time-independent and determinate intrinsic reality. This would be true, for instance, if an intrinsic energy-momentum eigenstate were viewed in (transformed to) an observer's space-time detection frame where it would become a superposition.

Note that although a change in basis and a frame transformation are usually thought of as two quite distinct ideas (a space-time frame transformation, for instance, does not involve a change in base observables) both are represented in Hilbert space by a unitary operator. It is therefore mathematically trivial to generalize the idea of a frame transformation to an arbitrary unitary operator that includes a possible change in basis.
 
Last edited:
  • #71
DrChinese said:
Think about this one as well. So if the "true" match rate is 33% when the QM predicted (and observed) value is 25%, AND this is due to superdeterminism controlling the choice of Alice and Bob's measurement settings: you don't need time-varying/fast-switching as part of your test. You make no choice other than to have the angle at 120 degrees (or whatever) and leave the entire test running at that. No change. Ever. After all, the superdeterminism hypothesis is control over the measurement settings so that the "correct" (and misleading) sub-sample is picked, not some (light speed) signal from Alice to Bob. [Which is what fast-switching is intended to protect against.]

I wouldn't say that that's all that it is protecting against.

A very general hidden-variable expression for the joint probability that Alice gets result A and Bob gets result B, given that Alice's setting is \alpha and Bob's setting is \beta, is:

P(A, B|\alpha, \beta) = \sum_\lambda P(\lambda | \alpha, \beta) P_A(A|\alpha, \beta, \lambda) P_B(B|\alpha, \beta, \lambda)

No superdeterminism implies that

P(\lambda | \alpha, \beta) = P(\lambda)

Leaving it running for hours on end doesn't ensure that.

No FTL signalling and no superdeterminism implies that

P_A(A|\alpha, \beta, \lambda) = P_A(A|\alpha, \lambda)

P_B(B|\alpha, \beta, \lambda) = P_B(B|\beta, \lambda)
 
  • #72
stevendaryl said:
I wouldn't say that that's all that it is protecting against.

Fast switching protects us from needing to consider new, currently unknown local effects outside of QM that might alter outcomes. There aren't any such effects currently posited, and we already know they aren't a factor. How could they be? The results don't change whether you have fast switching or not. So obviously that is not a factor either way. That should end the discussion of the need for fast switching, except if you are attempting a full-on loophole-free Bell test - something we aren't discussing here (and which is material for a different thread).

Just saying that you don't need fast switching for a Bell test. There are a thousand other factors we could attempt to rule out in any experiment as well, but we don't. Example: we don't run tests on Mondays and Thursdays to prove that the day of the week doesn't affect experimental results either.
 
  • #73
mikeyork said:
in the intrinsic context of a photon, time stands still

This is not correct; a correct statement would be that in "the intrinsic context of a photon", the concept of "elapsed time" is not well-defined.
 
  • #74
stevendaryl said:
I feel like I'm arguing a two-front war here. On the one hand, I don't think that it's impossible to have a superdeterministic explanation for QM statistics. On the other hand, I think that such a theory would be very bizarre, and nothing like any theory we've seen so far.


Your line of reasoning is based on a completely wrong picture of how physics is supposed to work. In physics, objects behave the way they behave because something acts on them (a force, for example). Objects like planets or particles do not perform computations and decide how to move in order to achieve some "purpose". Such a weird, anthropocentric view leads nowhere. One could easily make similar arguments for why, for example, general relativity is almost impossible.

We observe that stars correlate their motion and form spiral galaxies. Do you think that a star actually performs computations using the positions/momenta of all other masses in the galaxy and "decides" how to move so that a spiral shape is maintained?

The correct picture is this: objects (stars or particles) move as a result of the force acting on them. That force is a function of the magnitude of the fields present at that location. The magnitude of the fields is determined by the positions/momenta of all field sources. If you deal with infinite-range fields, like gravity and electromagnetism, it follows that the motion of each object is a function of the positions/momenta of all objects that qualify as field sources.

So, from a purely mathematical point of view, no field theory of infinite range allows the objects described by the theory to evolve independently.

Now, there is a lot of confusion regarding superdeterminism, so I think it is better to avoid this word and define other terms as follows:

I consider any deterministic theory to be of type D. Newtonian gravity, general relativity, classical electromagnetism, Newtonian mechanics of the rigid body, Bohmian mechanics are all D type theories.

I consider a deterministic theory to be of type D+ if the theory does not allow the detector settings and the hidden variable to be independent variables. That includes Newtonian gravity, general relativity, classical electromagnetism and Bohmian mechanics. Newtonian mechanics is NOT of this type, as one can move an object around without any effect on the other objects.

It is easy to see that a description of a Bell test in terms of charged particles (electrons and quarks) moving around is indeed of type D+. If you want, for example, to calculate the motion of the particles involved in the emission of the entangled photons (so that you can determine the spins), you will also need the positions/momenta of the particles that make up the detectors. They cannot be independent. So classical electromagnetism is a D+ type theory.

Let's now define a new type of theory, say D++. This is a D+ type theory that gives the same predictions as QM. I do not claim that classical electromagnetism is of this type. It might be, but I don't have enough evidence for that. Bohmian mechanics is a D++ type theory.

Now, I think it is best to treat superdeterminism in a way analogous to non-locality. While non-local theories cannot be ruled out by Bell, that doesn't mean they are true. Newtonian gravity is non-local, so in a universe described by this theory the detector settings cannot be independent of the hidden variable. The statistical calculations used in Bell's theorem do not work in this case. But Newtonian gravity is still not a true description of the quantum world. So I would say it is correct to call D+ theories superdeterministic, even if they are not true (D++) theories. Most of the debate here is centered on the fact that some call D+ theories superdeterministic, while others require only D++ theories to be called that way.

Let me now approach the problem of what it would take for a D+ theory to also be a D++ theory. As I stressed in my first post, trying to come up with a simple explanation for the violation of Bell's inequality has little chance of success, even if you have the right theory. First, you will need a valid initial state (some states might evolve into the lab blowing up, etc.). Then, based on that initial state, you need to simulate the motion of at least the particles directly involved in the experiment (PDC source, detectors, Alice, Bob, etc.) and see what the result will be. Just looking at some equations will not help you, in the same way that just looking at the equations of general relativity doesn't make the spiral shape of a galaxy obvious. So this type of argument, involving what Alice/Bob can and cannot do and how the particles send messages, etc., is useless.

The only way to ascertain whether a D+ type theory is also a D++ type is to see if that theory gives QM in some limit. Then one can use QM to calculate predictions for experiments.
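For what it's worth, the QM prediction that any candidate D++ theory would have to reproduce is easy to state. A minimal Python sketch of the CHSH quantity for the singlet state, using the standard correlation E(a, b) = -cos(a - b) and the usual four settings (the angles are the textbook choice, nothing else is assumed):

```python
import math

# QM prediction for the singlet state: E(a, b) = -cos(a - b)
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH settings (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local hidden-variable theories satisfy |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, exceeding the local bound of 2
```

Any D+ theory that reproduced |S| = 2√2 in the appropriate limit would qualify as D++ in the sense above.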

Andrei
 
  • #75
ueit said:
D++. This is a type D+ theory that gives the same predictions as QM. I do not claim that classical electromagnetism is of this type. It might be but I don't have enough evidence for that.
It is not. One cannot model squeezed states of light obtained by parametric down-conversion in terms of classical electromagnetic fields.
 
  • #76
Demystifier said:
Here is an exact quote of Bell (the bolding is mine):

"An essential element in the reasoning here is that a and b are free variables. One can envisage then theories in which there just are no free variables for the polarizer angles to be coupled to. In such 'superdeterministic' theories the apparent free will of experimenters, and any other apparent randomness, would be illusory. Perhaps such a theory could be both locally causal and in agreement with quantum mechanical predictions. However I do not expect to see a serious theory of this kind. I would expect a serious theory to permit 'deterministic chaos' or 'pseudorandomness', for complicated subsystems (e.g. computers) which would provide variables sufficiently free for the purpose at hand. But I do not have a theorem about that."

It seems to me that Bell did understand superdeterministic theories to be of the D+ type as defined by me above. He clearly allows that not all superdeterministic theories need to reproduce QM, by saying only that perhaps "such a theory could be both locally causal and in agreement with quantum mechanical predictions".

Indeed, in classical electromagnetism "the apparent free will of experimenters, and any other apparent randomness" is "illusory", therefore it qualifies as a superdeterministic theory.

't Hooft discusses superdeterminism and the "conspiracy" arguments in this article:

The Fate of the Quantum
https://arxiv.org/pdf/1308.1007.pdf

He also defines a requirement for a superdeterministic theory to be non-conspiratorial: the correlations should be present regardless of the initial state. So he replaces Bell's "free will" with the free choice of the initial state. I hope you find this approach acceptable.

Andrei
 
  • #77
A. Neumaier said:
It is not. One cannot model squeezed states of light obtained by parametric down-conversion in terms of classical electromagnetic fields.

Well, I am not so sure about that. There is a theory called stochastic electrodynamics (SED) that claims to obtain the QM formalism (including a classical derivation of Planck's constant) from classical electromagnetism plus the assumption that there exists a zero-point field of a certain type. If their derivation is correct, then every prediction of QM can also be explained in a classical way. See for example this article:

Stochastic electrodynamics as a foundation for quantum mechanics
Physics Letters A - Volume 56, Issue 4, 5 April 1976, Pages 253-254

http://www.sciencedirect.com/science/article/pii/0375960176902978

Andrei
 
  • #78
ueit said:
Well, I am not so sure about that. There is a theory, called stochastic electrodynamics (SED)
This approach has limitations. It recovers many effects, but only those of states of light that have a positive Wigner function. See

The Nature of Light: What Is a Photon?
Optics and Photonics News, October 2003
http://www.osa-opn.org/Content/ViewFile.aspx?Id=3185
 
  • #79
ueit said:
Your line of reasoning is based on a completely wrong picture of how physics is supposed to work.

I was talking specifically about a stylized version of the EPR experiment, to show the role of superdeterminism as a loophole. I was not discussing how physics is supposed to work.

In physics, objects behave the way they behave because there is something acting on them (a force, for example). Objects like planets or particles do not make computations and decide how to move in order to achieve some "purpose". Such a weird, anthropocentric view leads nowhere. One can easily make similar arguments for why, for example, general relativity is almost impossible.

The point of using anthropomorphic language was that it makes the implausibility of superdeterminism clearer. In the EPR experiment, Alice can decide, ahead of time, to base her choice of which setting to use on absolutely anything--whether she sees a shooting star, the scores of the game on the radio, etc. She can make her decision as "anthropocentric" as she likes. In order for the superdeterministic loophole to make sense, the hidden mechanism has to anticipate her choice. So potentially it has to predict the future of the universe with unerring accuracy.

And, no, it is nothing like GR. Determinism and superdeterminism are not the same things. That's just a misconception on your part.
 
  • #80
stevendaryl said:
So potentially it has to predict the future of the universe with unerring accuracy.
Any deterministic model predicts the future of the universe modeled by it with unerring accuracy. So your conclusion provides no information beyond what is already in determinism.
 
  • #81
ueit said:
So, from a pure mathematical point of view no field theory of infinite range allows for the objects described by the theory to evolve independently.

Yes, if we use a deterministic field theory, then the whole universe evolves together. Show me how that leads to the quantum predictions for EPR. Actually, don't. Write a paper deriving the quantum predictions from a classical field theory. Then we can discuss it here. As it is, you're talking about a theory that is nonexistent, let alone mainstream.
 
  • #82
A. Neumaier said:
Any deterministic model predicts the future of the universe modeled by it with unerring accuracy. So your conclusion provides no information beyond what is already in determinism.

No, that's false. Even if the universe were completely deterministic, it would not be possible to make unerring, detailed predictions about the future evolution of the universe, because the computation would require the resources of the whole universe. I already went through this.
 
  • #83
stevendaryl said:
Yes, if we use deterministic field theory, then the whole universe evolves together.
And if we use deterministic ##N##-particle theory, then the same holds. Nothing in the argument depends on fields.

stevendaryl said:
As it is, you're talking about a nonexistent, let alone mainstream theory.
So was Bell, in his theorem. He proved that certain theories (allowing free choice which is impossible in a deterministic theory) don't exist, no more. And when he proved his theorem it was not mainstream.
 
  • #84
stevendaryl said:
No, that's false. Even if the universe were completely deterministic, it would not be possible to make unerring, detailed predictions about the future evolution of the universe, because the computation would require the resources of the whole universe. I already went through this.
Computational power is completely irrelevant for a mathematical or physical theory.

We cannot compute many things in quantum field theory due to lack of computational power, but we still believe it correctly models at least systems of the size of the sun, although we will never be able to do the computation because we never know the exact initial state of the sun.
 
  • #85
stevendaryl said:
No, that's false. Even if the universe were completely deterministic, it would not be possible to make unerring, detailed predictions about the future evolution of the universe, because the computation would require the resources of the whole universe. I already went through this.

Once again, suppose that Alice has a device, a computer equipped with a detailed description of the initial state of the universe, that was sufficient to predict the future with perfect accuracy and precision. So just to be perverse, she asks the computer program whether or not she will turn on a specific light switch at exactly 12:00. If the computer returns an answer before 12:00, then she does the opposite of whatever it predicts.

The conclusion is that one of the following would be true:
  1. the computer gives the wrong answer, or
  2. it will take the computer longer than 12:00 to come up with any answer at all
So under the assumption that it is possible for people to be as perverse as Alice, accurately predicting the future in a timely manner is impossible. You can convert this into a theorem in computer science: it is not possible to have a universal prediction program that predicts, in a timely manner, the future behavior of every program.
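The self-reference at the heart of this argument can be made concrete with a small Python sketch. Here `predict` is a stand-in for the hypothetical universal predictor; by the argument itself no correct-and-timely one exists, so this placeholder just answers instantly with a fixed guess. The point is that whatever it answers about `perverse_alice` is falsified:

```python
# A toy illustration (not a proof) of the diagonalization argument.

def predict(agent):
    """Stand-in for a universal predictor: guesses the agent's output.
    Any fixed, timely strategy will do for the illustration."""
    return True  # "Alice will flip the switch"

def perverse_alice():
    """Alice asks the predictor about herself and does the opposite."""
    forecast = predict(perverse_alice)
    return not forecast

# Whatever predict() answers about perverse_alice, it is wrong:
assert perverse_alice() != predict(perverse_alice)
```

A predictor that instead tried to be correct by simulating `perverse_alice` would recurse forever, which is exactly horn 2 above: it never returns an answer before 12:00.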
 
  • #86
A. Neumaier said:
So was Bell, in his theorem. He proved that certain theories (allowing free choice which is impossible in a deterministic theory) don't exist, no more. And when he proved his theorem it was not mainstream.

But Physics Forums is not the place to advance new results. Publish elsewhere, and we can discuss it here.
 
  • #87
stevendaryl said:
But Physics Forums is not the place to advance new results. Publish elsewhere, and we can discuss it here.
I have no intention to produce new results on this topic. I am only applying my logic to the statements offered in this thread.
 
  • #89
A. Neumaier said:
I have no intention to produce new results on this topic. I am only applying my logic to the statements offered in this thread.

Well, the claim that superdeterminism can reproduce the quantum predictions for EPR is a huge, non-mainstream claim. I suppose that rather than directly making such a claim, you can do a double flip and argue that the arguments against the claim are inadequate. I still think it should be a published paper.
 
  • #90
A. Neumaier said:
There is a precise mathematical version of this: There is no algorithm that can tell whether an arbitrary given program stops for an arbitrary given input. But this theorem has not the slightest physical implications, since the universe is neither an algorithm nor a Turing machine.

It's completely false that it has no physical implications. The same argument shows the impossibility of the kind of superdeterminism required to make EPR-type predictions using a deterministic theory.
 
  • #91
stevendaryl said:
I was talking specifically about a stylized version of the EPR experiment, to show the role of superdeterminism as a loophole. I was not discussing how physics is supposed to work.
The point of using anthropomorphic language was because it makes the implausibility of superdeterminism clearer. In the EPR experiment, Alice can decide, ahead of time, to base her choice of which setting to use on absolutely anything--whether she sees a shooting star, the scores of the game on the radio, etc. She can make her decision as "anthropocentric" as she likes. In order for the superdeterministic loophole to make sense, the hidden mechanism has to anticipate her choice. So potentially it has to predict the future of the universe with unerring accuracy.

And, no, it is nothing like GR. Determinism and superdeterminism are not the same things. That's just a misconception on your part.

The same argument that makes "the implausibility of superdeterminism clearer" can be used to make any physical theory implausible, as in my example with GR: the Earth "anticipates" where the Sun will be 8 minutes from now and accelerates toward that particular place (not toward where the Sun is seen with the eyes); the Sun needs to "calculate" where all the stars in the galaxy will be thousands of years from now in order to move so that it remains in the spiral arm; and so on.

It is not the case that Alice makes a choice about how to set the detector and the source somehow anticipates her choice. The situation is like this:

The source of entangled particles is a quark/electron subsystem (S1).
Alice, her detector, and whatever she decides to use to help her with the decision form another quark/electron subsystem (S2).
Bob, his detector, and whatever he decides to use to help him with the decision form another quark/electron subsystem (S3).

S1, S2 and S3 form the whole experimental system, S.

As I have argued before, S1, S2 and S3 cannot be independent. In order to describe the evolution of S1, S2 and S3 you need the resultant electric/magnetic fields originating from the whole system, S. So S1, S2 and S3 all evolve as a function of S. Given this situation, correlations are bound to appear between the motions of the subatomic particles of S1, S2 and S3. Sometimes those correlations become visible at the macroscopic level, and this is the fundamental cause of the observed correlations.

Now, why these exact correlations and not others? I don't know. As I have said, one needs to perform a simulation of S and see what the result is. If the result is correct, the theory might be right. But even in that case you will not get a simple explanation in terms of an oversimplified macroscopic description.
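The bare mechanism being claimed here can be illustrated with a toy numerical sketch. This is my own illustration, not classical electromagnetism and certainly not a model of a Bell test: three scalar "subsystems" each evolve under a field sourced by the whole system, and their motions come out correlated without any signal passing between them.

```python
import random

# Toy model: three "subsystems" are scalars x[0..2], each driven by a
# common field F = x[0] + x[1] + x[2] (a crude stand-in for the resultant
# field sourced by the whole system S). None evolves independently.

def simulate(steps=1000, dt=0.01, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in range(3)]   # random initial positions
    v = [0.0, 0.0, 0.0]
    hist = [[], [], []]
    for _ in range(steps):
        field = sum(x)                 # every subsystem feels the whole of S
        for i in range(3):
            v[i] += -field * dt        # common driving term
            x[i] += v[i] * dt
            hist[i].append(x[i])
    return hist

def correlation(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (w - mb) for u, w in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((w - mb) ** 2 for w in b) / n
    return cov / (va * vb) ** 0.5

h = simulate()
print(correlation(h[0], h[1]))  # close to 1: the shared field correlates them
```

In this toy the common driving term makes the trajectories parallel, so the correlation comes out essentially perfect; the only point is that coupling through a shared field, not communication or anticipation, is what produces it. Nothing here shows that such correlations can match the QM statistics, which is exactly the open question.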

Andrei
 
  • #92
A. Neumaier said:
There is a precise mathematical version of this: There is no algorithm that can tell whether an arbitrary given program stops for an arbitrary given input. But this theorem has not the slightest physical implications, since the universe is neither an algorithm nor a Turing machine.

Actually, what I'm talking about is not the same theorem. It is the theorem that it is impossible, in general, to predict the future state of a program in less time than it takes to just run the program. There is a universal program that can predict future states of other programs, but not in a timely manner. In contrast, there is no program at all that can solve the halting problem.
 
  • #93
stevendaryl said:
the claim that superdeterminism can reproduce the quantum predictions for EPR
I never made that claim.
stevendaryl said:
The same argument shows the impossibility of the kind of superdeterminism required to make EPR-type predictions using a deterministic theory.
It is based on the assumption that the dynamics is effectively computable. This is a ridiculous assumption. Almost no deterministic dynamics is computable, and it need not be. Certainly Newton's theory of gravity is not computable.
 
  • #94
ueit said:
The same argument making "the implausibility of superdeterminism clearer" can be used to make any physical theory implausible, just in my example with GR. The Earth anticipates where the sun will be 8 minutes from now and accelerates toward that particular place (not where the Sun is seen with the eyes). The Sun needs to calculate where all the stars in the galaxy will be thousands of years from now to move so that it remains in the spiral arm, etc.

That shows exactly the difference between a deterministic theory and a superdeterministic theory. GR is deterministic, but not superdeterministic.

If instead of the Sun and the Earth you have a rock orbiting a massive spaceship at a distance of 8 light-minutes, far from any other gravitational sources, and that spaceship can maneuver using rockets, then it absolutely will not be the case that the rock accelerates toward where the spaceship will be 8 minutes from now (or whatever the claim was). If the spaceship uses its rockets to change location suddenly, the rock will continue to behave as if the spaceship were still where it was, until the information about the ship's new location and velocity has had time to reach the rock.

Thanks.
 
  • #95
A. Neumaier said:
I never made that claim.

Well, that's what this discussion is about.
 
  • #96
A. Neumaier said:
It is based on the assumption that the dynamics is effectively computable. This is a ridiculous assumption.

That's why the superdeterministic loophole can't actually work.
 
  • #97
stevendaryl said:
Yes, if we use deterministic field theory, then the whole universe evolves together. Show me how that leads to the quantum predictions for EPR. Actually, don't. Write a paper deriving the quantum predictions from a classical field theory. Then we can discuss it here. As it is, you're talking about a nonexistent, let alone mainstream theory.

I have already done that in a previous post:

Stochastic electrodynamics as a foundation for quantum mechanics
Physics Letters A - Volume 56, Issue 4, 5 April 1976, Pages 253-254


http://www.sciencedirect.com/science/article/pii/0375960176902978

The most up-to date version of the theory is published in a book. You can find it free here:

The Emerging Quantum
https://loloattractor.files.wordpre..._marc3ada_cetto_andrea_valdc3a9bookzz-org.pdf

It seems to me that while you are continuously asking me to present papers and so on, you don't take your own burden of proof seriously. For example, you make the claim that in order to get EPR results in a deterministic theory you need the ridiculous mechanism of objects anticipating what other objects will do. I have seen no rigorous argument for that. None of the scientists working on superdeterministic theories, like 't Hooft and the authors of the book above, have used such a model.

Andrei
 
  • #98
stevendaryl said:
That shows exactly the difference between a deterministic theory and a superdeterministic theory. GR is deterministic, but not superdeterministic.

If instead of the Sun and the Earth you have a rock orbiting a massive spaceship at a distance of 8 light-minutes, far from any other gravitational sources, and that spaceship can maneuver using rockets, then it absolutely will not be the case that the rock accelerates toward where the spaceship will be 8 minutes from now (or whatever the claim was). If the spaceship uses its rockets to change location suddenly, the rock will continue to behave as if the spaceship were still where it was, until the information about the ship's new location and velocity has had time to reach the rock.

Thanks.

Your example is irrelevant because it falls outside the scope of GR. The rockets are not systems described by GR.

This situation is completely different from the description of EPR in terms of subatomic particles because there is nothing there that is not inside the scope of QM (and of the candidate hidden variable theory).
 
  • #99
ueit said:
Your example is irrelevant because it falls outside the scope of GR. The rockets are not systems described by GR.

It illustrates why GR is not superdeterministic, just deterministic.
 
  • #100
ueit said:
This situation is completely different from the description of EPR in terms of subatomic particles because there is nothing there that is not inside the scope of QM (and of the candidate hidden variable theory).

But the same conclusion holds. It doesn't matter what forces govern the subatomic particles. As long as the behavior is complex enough to do things like computations, it is not predictable in enough detail to allow a superdeterministic explanation of the EPR statistics.
 
Back
Top