OK Corral: Local versus non-local QM

  • Thread starter: wm
  • Tags: local QM
In summary, the conversation discusses the issue of local versus non-local interpretations of quantum mechanics, specifically in relation to the EPRB experiment with spin-half particles. The participants hope to resolve the issue using mathematics. The Many-Worlds Interpretation (MWI) is introduced and explained as a way to understand the distribution of information in the universe and how it relates to Alice's and Bob's worlds.
  • #176
enotstrebor said:
It has always been assumed that the "entangled" photon is no different EM-wise from any other photon.

If one had a physical model of the photon, one might see that there are actually two types of linearly polarized photon: regular and "entangled". The entangled photon has an electric vector whose maximum is twice (mag. 2) that of the normal photon (mag. 1). Thus the correlation bound is in fact not <= 2 (1+1) but can be <= 4 (2+2).

etc.

OK, let's test your hypothesis against Bell's Theorem. For the 8 cases
below, please give your expectation probabilities for a specific a, b and c:

1. a+ b+ c+ : ?
2. a+ b+ c- : ?
3. a+ b- c+ : ?
4. a+ b- c- : ?
5. a- b+ c+ : ?
6. a- b+ c- : ?
7. a- b- c+ : ?
8. a- b- c- : ?

If they add to 100% and none are less than 0%, then your hypothesis is realistic (as this is the precise definition of realism). Try the settings a=0°, b=67.5°, c=45° for your entangled photons.

You see, a model does not become realistic simply because you say it is. It must meet a very specific condition, one that Bell found was too difficult for local realistic theories to achieve.

Unless you are willing to share your predictions, I don't think we will be able to evaluate your idea.
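To make the test concrete, here is a sketch of the check for those settings. It assumes the standard QM rule that this kind of entangled photon pair gives different results at settings x, y with probability sin²(x−y); the algebra in the comments shows why no non-negative p1..p8 can reproduce those rates.

```python
import math

def mismatch(x, y):
    """QM probability that the two polarization results differ at settings x, y (degrees)."""
    return math.sin(math.radians(x - y)) ** 2

a, b, c = 0.0, 67.5, 45.0

# For any eight non-negative probabilities p1..p8 over the outcome triples
# (a±, b±, c±) listed above, the mismatch rates are:
#   mismatch(a,b) = p3 + p4 + p5 + p6
#   mismatch(a,c) = p2 + p4 + p5 + p7
#   mismatch(b,c) = p2 + p3 + p6 + p7
# so mismatch(a,c) + mismatch(b,c) - mismatch(a,b) = 2*(p2 + p7) >= 0.
lhs = mismatch(a, b)
rhs = mismatch(a, c) + mismatch(b, c)

print(f"mismatch(a,b)                 = {lhs:.4f}")  # 0.8536
print(f"mismatch(a,c) + mismatch(b,c) = {rhs:.4f}")  # 0.6464
print("realistic assignment possible?", lhs <= rhs)  # False
```

Since the QM left-hand side exceeds the right-hand side, at least one of the eight probabilities would have to be negative, which is exactly the failure of Bell realism described above.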
 
  • #177
enotstrebor said:
It has always been assumed that the "entangled" photon is no different EM-wise from any other photon.

If one had a physical model of the photon, one might see that there are actually two types of linearly polarized photon: regular and "entangled". The entangled photon has an electric vector whose maximum is twice (mag. 2) that of the normal photon (mag. 1). Thus the correlation bound is in fact not <= 2 (1+1) but can be <= 4 (2+2).

Robert, can you point me to any reference where this is explicated? I'm very interested to find out more.

Interesting thread. Has anyone read this?
 

Attachments

  • Clifford alg values and Bell quant-ph. 0703179.pdf (131.3 KB)
  • #178
Mentz114 said:
Robert, can you point me to any reference where this is explicated? I'm very interested to find out more.

Interesting thread. Has anyone read this?

Attached Files: Clifford alg values and Bell quant-ph. 0703179.pdf (131.3 KB)

Please, are the attached files accessible by clicking, or how does one access them?
 
  • #179
Yes. I can click on that link and download from arXiv.

If it does not work for you, go to xxx.lanl.gov and download it.
 
  • #180
Mentz114 said:
Yes. I can click on that link and download from arXiv.

If it does not work for you, go to xxx.lanl.gov and download it.

Does this refute Bell's theorem? Expert comment, please?

From abstract quant-ph/0703179

Disproof of Bell’s Theorem by Clifford Algebra Valued Local Variables

Joy Christian, Perimeter Institute, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5, Canada, and Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, England

It is shown that Bell’s theorem fails for the Clifford algebra valued local realistic variables. This is made evident by exactly reproducing quantum mechanical expectation value for the EPR-Bohm type spin correlations observable by means of a local, deterministic, Clifford algebra valued variable, without necessitating either remote contextuality or backward causation. Since Clifford product of multivector variables is non-commutative in general, the spin correlations derived within our locally causal model violate the CHSH inequality just as strongly as their quantum mechanical counterparts.
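As a side note on the abstract's last sentence: the size of the quantum CHSH violation it refers to is easy to reproduce numerically. This is a sketch assuming the standard singlet-state correlation E(a,b) = −cos(a−b) for spin-1/2 particles and the usual angle choices that maximize the violation.

```python
import math

def E(x, y):
    """Singlet-state spin correlation for analyzer angles x, y (degrees)."""
    return -math.cos(math.radians(x - y))

# Standard CHSH angles that maximize the quantum violation.
a, a2, b, b2 = 0.0, 90.0, 45.0, 135.0

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"S = {S:.4f}")  # 2.8284 = 2*sqrt(2), versus the local realistic bound of 2
```

Any local realistic model in Bell's sense must keep |S| ≤ 2, so matching the quantum value of 2√2 is what both QM and any purported local counter-model must be judged against.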
 
  • #181
QuantunEnigma said:
Does this refute Bell's theorem? Expert comment, please?

From abstract quant-ph/0703179

Disproof of Bell’s Theorem by Clifford Algebra Valued Local Variables
I don't know anything about Clifford Algebra so I can't follow it myself, but I came across a short critical response here:

http://www.arxiv.org/abs/quant-ph/0703218

Christian also has a "reply to critics" here:

http://www.arxiv.org/abs/quant-ph/0703244
 
  • #182
QuantunEnigma said:
Does this refute Bell's theorem? Expert comment, please?

From abstract quant-ph/0703179

Disproof of Bell’s Theorem by Clifford Algebra Valued Local Variables

Joy Christian, Perimeter Institute, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5, Canada, and Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, England

It is shown that Bell’s theorem fails for the Clifford algebra valued local realistic variables. This is made evident by exactly reproducing quantum mechanical expectation value for the EPR-Bohm type spin correlations observable by means of a local, deterministic, Clifford algebra valued variable, without necessitating either remote contextuality or backward causation. Since Clifford product of multivector variables is non-commutative in general, the spin correlations derived within our locally causal model violate the CHSH inequality just as strongly as their quantum mechanical counterparts.

I have previously read this. In my opinion, it will never be accepted as being of substance sufficient to change minds about Bell's Theorem. It is highly technical, and I don't believe it addresses any of the elements Bell has laid out.

There are new "disproofs" of Bell posted to the arXiv every month or so. These are regularly rejected for publication in peer-reviewed journals.

I have a simple test for any disproof - I have posted it here many times. So far, I have no takers. (All I ask is that someone provide their predictions for a pair of entangled particles when at observation settings a, b and c - where the outcomes are independent of which a/b/c are to be observed. The predictions need only add to 100% and none should be less than zero, which is of course the requirement of Bell Realism.)

Because most of these disproofs hinge on technical issues, they miss the entire point Bell made. Bell used the definition of reality associated with Einstein: that particle observables must have values independent of the act of observation. This is directly opposed to the Heisenberg Uncertainty Principle, as Einstein was acutely aware. These disproofs substitute a different definition and then tear it down, which makes them "straw man" arguments.

This paper's argument attempts to show that a local realistic theory will also violate a Bell Inequality - something that presumably can only be accomplished by a non-local or non-realistic theory such as Quantum Mechanics. This is a flawed logic model, as the issue is to demonstrate that a local realistic theory can both match experiment AND meet (not violate) the standard set by a Bell Inequality. If it were capable of this, it could pass my little test. Note that Quantum Mechanics does NOT need to address my test, since it does not claim to be realistic (and local).

In summary: it is NOT true that a purported classical (local & realistic) theory which violates a Bell Inequality will render Bell's Theorem invalid. Therefore, Christian's paper ultimately fails. Bell provided a specific set of settings for a/b/c to consider for any local realistic theory, and I note that these were not addressed.
 
  • #183
DrChinese said:
This paper's argument attempts to show that a local realistic theory will also violate a Bell Inequality - something that presumably can only be accomplished by a non-local or non-realistic theory such as Quantum Mechanics. This is a flawed logic model, as the issue is to demonstrate that a local realistic theory can both match experiment AND meet (not violate) the standard set by a Bell Inequality. If it were capable of this, it could pass my little test. Note that Quantum Mechanics does NOT need to address my test, since it does not claim to be realistic (and local).

A local realistic theory, in order to predict the observed correlations, must be deterministic. If it is stochastic it needs non-locality.

Bell's theorem rejects from the start any deterministic theory because of its "free-choice" assumption.

What you are asking with your test is a logical impossibility. Any local theory that tries to "beat" Bell's theorem and does not deny the "free choice" assumption is doomed because it is logically contradictory (deterministic and non-deterministic at the same time).
 
  • #184
ueit said:
A local realistic theory, in order to predict the observed correlations, must be deterministic. If it is stochastic it needs non-locality.

Bell's theorem rejects from the start any deterministic theory because of its "free-choice" assumption.
It doesn't reject deterministic theories; it just rejects bizarre "conspiracy" theories where somehow the initial conditions of the universe determine both the state of the particles emitted by the source on a given trial and the brain state of the experimenter on the same trial in just the right way to give the required correlations. As long as you assume the brain state of the experimenter before making a choice on a given trial is statistically independent of the hidden states of the particle emitted by the source on the same trial, then Bell's theorem can rule out local realism; it doesn't matter whether the universe is fundamentally deterministic or not.

In any case, although I don't understand the details of what Christian is proposing in his Clifford Algebra paper, I didn't get the impression he was proposing this sort of "conspiracy" explanation.
 
  • #185
ueit said:
A local realistic theory, in order to predict the observed correlations, must be deterministic. If it is stochastic it needs non-locality.

Bell's theorem rejects from the start any deterministic theory because of its "free-choice" assumption.

What you are asking with your test is a logical impossibility. Any local theory that tries to "beat" Bell's theorem and does not deny the "free choice" assumption is doomed because it is logically contradictory (deterministic and non-deterministic at the same time).

You imply that Bell's requirements are too strict, and possibly unnecessary as well.

IF...*you* postulate a local realistic theory, THEN Bell applies. If you don't like the results, that is your problem, and I cannot help you. Bell does not apply to a non-realistic theory such as QM, nor does it apply to non-local theories such as BM.

I mean, it is not like Bell randomly came up with his theory just to confound you! :tongue:
 
  • #186
JesseM said:
It doesn't reject deterministic theories; it just rejects bizarre "conspiracy" theories where somehow the initial conditions of the universe determine both the state of the particles emitted by the source on a given trial and the brain state of the experimenter on the same trial in just the right way to give the required correlations. As long as you assume the brain state of the experimenter before making a choice on a given trial is statistically independent of the hidden states of the particle emitted by the source on the same trial, then Bell's theorem can rule out local realism; it doesn't matter whether the universe is fundamentally deterministic or not.

I think there are two types of deterministic theories:

1. theories that lack long-range forces (Newtonian billiard balls that interact only when collisions take place)

2. GR-type theories, where each particle (in the case of GR, each massive body) interacts with every other particle.

I agree with you that "billiard balls" theories are pretty much rejected by Bell's theorem, not because they cannot possibly work, but because, in order for them to work, "bizarre conspiracies" must be postulated about the initial state of the universe.

However, the second type of theories need not posit such conspiracies. The assumption of statistical independence between distant parts of a system is highly questionable. A change in one part of the system requires a change of the whole. You cannot, say, move Mars into a different orbit while keeping the other bodies in place. Of course, with a more complex system, like two distant galaxies, it is not so obvious that a change in one is impossible without a corresponding adjustment of the second, so one might be fooled into thinking that there are two statistically independent systems.

The bottom line is that local-deterministic theories are not ruled out by the experimental evidence but by the assumption of statistical independence used for the derivation of Bell's theorem.
 
  • #187
DrChinese said:
IF...*you* postulate a local realistic theory, THEN Bell applies.

Not if I deny statistical independence between the source and detectors.
 
  • #188
ueit said:
I think there are two types of deterministic theories:

1. theories that lack long-range forces (Newtonian billiard balls that interact only when collisions take place)

2. GR-type theories, where each particle (in the case of GR, each massive body) interacts with every other particle.
But objects don't interact instantaneously--GR still has a light cone structure, so if you foliate spacetime into a stack of spacelike surfaces, everything going on in one region of space in a given surface should be determined by what was going on in the complete set of points in space in an earlier surface that lie in the later region's past light cone, and nothing outside that region of the earlier surface should have an effect on the chosen region of the later surface (There are weird spacetimes that apparently can't be foliated in this way, like ones containing closed timelike curves, but I think this is true as long as you assume a globally hyperbolic spacetime).

So, it seems to me the situation is no different with GR than with billiard balls--if the event of the experimenter choosing what measurement setting to use and the event of the source generating the two particles are each outside the other's future and past light cone (a spacelike separation), the only way they could fail to be statistically independent is if you assume that some event or events in their mutual past light cone determined these two events in just the right way to create the correlations--the "conspiracy" assumption.
ueit said:
However, the second type of theories need not posit such conspiracies. The assumption of statistical independence between distant parts of a system is highly questionable. A change in one part of the system requires a change of the whole. You cannot, say, move Mars into a different orbit while keeping the other bodies in place.
If you set off a bunch of nuclear bombs or something on Mars to shift its orbit, we wouldn't feel any gravitational effects of this event any sooner than we'd receive light waves from the event--gravitational waves travel at c just like electromagnetic waves.
ueit said:
The bottom line is that local-deterministic theories are not ruled out by the experimental evidence but by the assumption of statistical independence used for the derivation of Bell's theorem.
As I understand it, "local" means "having a light cone structure", and you can use the type of argument I made above to show that no local theory where each event has a single definite outcome (as opposed to a many-worlds type theory) can explain the violation of Bell inequalities without positing a "conspiracy" where events in the past light cone of both the source's particle emission and the experimenter's choosing of setting always causes them to be correlated in just the right way to give the observed results. If you don't posit such a conspiracy, how can you explain a correlation between two events with a spacelike separation?
 
  • #189
ueit said:
Not if I deny statistical independence between the source and detectors.

Again, you speak in generalities when you imply such a connection. The source is a polarized laser beam. The detectors are polarized as well. Exactly how do you propose that the results depend on the source? The usual formula, Cos^2(a-b), relates the results of the detector settings but lacks a term for the source setting. This formula has substantial experimental validation.

In my scientific opinion: unless you can predict the results of specific cases in advance or otherwise improve the accuracy of the usual formula by adding a term for a source setting, you may as well be asserting that the results are a function of the phase of the moon.

Or perhaps - gasp - it is a hidden variable. But haven't we been down that road before? Isn't that exactly what Bell started with? :tongue:
 
  • #190
JesseM said:
But objects don't interact instantaneously--GR still has a light cone structure, so if you foliate spacetime into a stack of spacelike surfaces, everything going on in one region of space in a given surface should be determined by what was going on in the complete set of points in space in an earlier surface that lie in the later region's past light cone, and nothing outside that region of the earlier surface should have an effect on the chosen region of the later surface (There are weird spacetimes that apparently can't be foliated in this way, like ones containing closed timelike curves, but I think this is true as long as you assume a globally hyperbolic spacetime).

I agree.

So, it seems to me the situation is no different with GR than with billiard balls--if the event of the experimenter choosing what measurement setting to use and the event of the source generating the two particles are each outside the other's future and past light cone (a spacelike separation), the only way they could fail to be statistically independent is if you assume that some event or events in their mutual past light cone determined these two events in just the right way to create the correlations--the "conspiracy" assumption. If you set off a bunch of nuclear bombs or something on Mars to shift its orbit, we wouldn't feel any gravitational effects of this event any sooner than we'd receive light waves from the event--gravitational waves travel at c just like electromagnetic waves.

There is a subtle error in your above line of reasoning. It is true that if we change Mars's orbit by using a nuke, the effect will manifest on Earth at the same moment we see the explosion. However, in this case we are not dealing with a deterministic theory anymore. While GR is deterministic, the nuclear explosion is not governed by GR and therefore, from GR's "point of view", it is a truly unpredictable event. We have a mixture of a deterministic theory with random events, and this is not what I propose as a local realistic explanation of EPR.

Now, let's make a correct analogy by letting a stray planet, coming from a distant galaxy, alter the orbit of Mars. In this case we deal with a truly deterministic system, and the effect will be felt instantaneously on Earth, as Newton's law of gravity (which is a good approximation for this case) predicts, before the light from Mars reaches us. This is because the stray planet does not interact only with Mars, but with Earth, and Jupiter, and all other bodies at once.

If we return to Bell's theorem, we see that we are in the "stray planet" case and not in the "nuke" case. The mechanism behind the decision to move the detector to a different axis is entirely covered by QM (unless you propose a mind/body dualism), so it does not and cannot "inject" randomness into the quantum system the way the nuke does for the gravitational system.

The difference between the "billiard ball" theory and GR is that in the former the particles are not aware of each other (a particle in a distant place has no effect on another), while in the latter such awareness exists even beyond the light cone (if no non-deterministic events like nukes are allowed). For example, in the solar system the Earth accelerates towards the future position of the Sun, and not towards its retarded position (the place where we see the Sun).

As I understand it, "local" means "having a light cone structure", and you can use the type of argument I made above to show that no local theory where each event has a single definite outcome (as opposed to a many-worlds type theory) can explain the violation of Bell inequalities without positing a "conspiracy" where events in the past light cone of both the source's particle emission and the experimenter's choosing of setting always causes them to be correlated in just the right way to give the observed results. If you don't posit such a conspiracy, how can you explain a correlation between two events with a spacelike separation?

I think I've provided an explanation why your above argument does not apply to EPR. If QM is deterministic there is no source of randomness that can be used to "fool" the PDC about the future detector orientation. The light cone structure is not a problem because the information about the past is enough to perfectly predict the future.

A local, realistic, non-conspiracy type mechanism for EPR would be as follows:

1. Every particle in the experimental setup sends a signal, at light speed, towards the PDC source (this is in fact true for every particle in the visible universe).

2. The calcium atom "reads" from those signals the position/momentum for each particle and "computes" their future evolution (including how the detector will be oriented at the time of detection). This might resemble the way Earth "reads" from the space curvature around it the position/momentum of other massive bodies and then "decides" how to accelerate.

3. When a suitable future detector orientation is detected (suitable in the sense that it must conform with Malus's law and conservation laws) a pair of "entangled" particles is emitted, "laughing" about the futile attempts of the experimenter to "beat the system".
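To see why dropping statistical independence makes Bell's bound unenforceable, the scheme above can be caricatured in a few lines. This is only a toy illustration, not a model of the proposed mechanism: the point is that once the source is granted knowledge of both future settings a and b, reproducing the QM cos²(a−b) match rate with purely local outcome functions is trivial.

```python
import math
import random

def run_trial(a, b, rng):
    """Toy 'conspiratorial' source: the hidden variables are drawn with
    full knowledge of BOTH future detector settings a and b (degrees)."""
    match_prob = math.cos(math.radians(a - b)) ** 2
    alice = rng.choice((+1, -1))                    # 50/50 marginal, as observed
    # Bob's outcome depends on Alice's setting a (via match_prob) -- this is
    # exactly the statistical-independence assumption being discarded.
    bob = alice if rng.random() < match_prob else -alice
    return alice, bob

rng = random.Random(42)   # rng plays the role of the shared hidden variable
a, b = 0.0, 22.5
results = [run_trial(a, b, rng) for _ in range(100_000)]
match_rate = sum(x == y for x, y in results) / len(results)
print(f"match rate ~ {match_rate:.3f}  (QM predicts {math.cos(math.radians(a - b))**2:.3f})")
```

The catch, as the replies in this thread stress, is the comment above Bob's line: his outcome function uses Alice's setting, which is precisely what Bell's independence assumption forbids a local source from knowing.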
 
  • #191
DrChinese said:
Again, you speak in generalities when you imply such a connection. The source is a polarized laser beam. The detectors are polarized as well. Exactly how do you propose that the results depend on the source? The usual formula, Cos^2(a-b), relates the results of the detector settings but lacks a term for the source setting. This formula has substantial experimental validation.

Please take a look at my above post to JesseM

In my scientific opinion: unless you can predict the results of specific cases in advance or otherwise improve the accuracy of the usual formula by adding a term for a source setting, you may as well be asserting that the results are a function of the phase of the moon.

I do not need to provide a physically plausible local-realistic mechanism for EPR, only a logically consistent one (without appealing to conspiracies as these are extremely non-parsimonious). That is enough to prove that your assertion regarding the applicability of Bell's theorem (in spite of what Bell himself said) is false.
 
  • #192
ueit said:
There is a subtle error in your above line of reasoning. It is true that if we change Mars orbit by using a nuke, the effect will manifest on Earth at the same moment we see the explosion. However, in this case we do not deal with a deterministic theory anymore. While GR is deterministic, the nuclear explosion is not governed by GR and therefore, from GR's "point of view" it is a true unpredictable event. We have a mixture of a deterministic theory with random events and this is not what I propose as a local realistic explanation of EPR.
I didn't assume the nuclear explosion was random, though. You are free to assume that whatever non-gravitational forces are involved are also governed by deterministic laws, like classical electromagnetism, which can certainly be incorporated into GR.

The point is just that no matter what your complete set of fundamental laws are, as long as they have a light cone structure, then there should be no statistical correlation between events A and B with a spacelike separation unless there's some event or events in the past light cone of A and B which predetermines them in the right way to create the correlation. And if A is the event of the source emitting particles in a certain state, and B is the event of an experimenter's brain making a choice of which setting to use on a given trial, then explaining the violation of Bell inequalities in terms of such a predetermining event in A and B's past light cone amounts to the "conspiracy" assumption discussed earlier. Do you disagree with any of this? If so, what specifically do you disagree with?
ueit said:
Now, let's make a correct analogy by letting a stray planet, coming from a distant galaxy, alter the orbit of Mars. In this case we deal with a truly deterministic system, and the effect will be felt instantaneously on Earth, as Newton's law of gravity (which is a good approximation for this case) predicts, before the light from Mars reaches us. This is because the stray planet does not interact only with Mars, but with Earth, and Jupiter, and all other bodies at once.
Are you sure about that? I believe it is true that if a body is moving at a constant velocity then we'll feel the pull from its current position. This is analogous to the situation in classical electromagnetism, where if you have a charge moving at constant velocity, other charges will be attracted to its current position rather than its retarded position; but this is in effect because the electromagnetic field has a built-in ability to "extrapolate" linear movement, there's no actual signals moving faster than light, and if the charge were to accelerate other charges would continue to be attracted to where the original charge would have been had it continued to move in a straight line, until they receive an "update" on its position in the form of electromagnetic waves. Because electromagnetic waves depend on a dipole moment while gravitational waves depend on a quadrupole moment, the gravitational field can "extrapolate" some more general types of movement than the electromagnetic field, like a spherically symmetric collapsing star, but in any situation where gravitational waves are generated, other objects do not anticipate all the motions, and continue to be attracted to the "wrong" positions until the gravitational waves reach them. And wouldn't one planet smashing into another and knocking it off course generate gravitational waves? See Sources of gravitational waves on wikipedia.
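For what it's worth, the constant-velocity "extrapolation" claim can be checked numerically from the Liénard-Wiechert velocity field. This is a sketch in 2D with c = 1; the velocity and field point are arbitrary choices, and no claim is made beyond uniform motion.

```python
import math

# Uniformly moving charge, c = 1: position x(t) = (beta*t, 0).
beta = 0.5
r = (1.0, 1.0)          # field point, observed at time t = 0

# Retarded condition |r - x(t_r)| = -t_r gives a quadratic in t_r:
# (beta^2 - 1)*t_r^2 - 2*beta*r[0]*t_r + (r[0]^2 + r[1]^2) = 0
A = beta**2 - 1.0
B = -2.0 * beta * r[0]
C = r[0]**2 + r[1]**2
t_r = (-B + math.sqrt(B**2 - 4*A*C)) / (2*A)    # retarded (negative-time) root

ret = (beta * t_r, 0.0)                          # retarded position of the charge
R = math.hypot(r[0] - ret[0], r[1] - ret[1])     # retarded distance (equals -t_r)
n = ((r[0] - ret[0]) / R, (r[1] - ret[1]) / R)   # unit vector, retarded pos -> field point

# The Lienard-Wiechert velocity field points along (n - beta_vec):
field_dir = (n[0] - beta, n[1])

# Direction from the charge's PRESENT position x(0) = (0, 0) to the field point:
present_dir = (r[0], r[1])

# Zero cross product means the field points away from the present position.
cross = field_dir[0] * present_dir[1] - field_dir[1] * present_dir[0]
print(f"t_r = {t_r:.4f}, |cross product| = {abs(cross):.2e}")
```

The cross product comes out zero to machine precision: the field of a uniformly moving charge does point along the line from its instantaneous position, even though it is built entirely from retarded data, which is the "built-in extrapolation" described above.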

In any case, it seems to me the argument about the light cone structure is pretty airtight. In electromagnetism there is a correlation between the direction one charge A is being pulled at a given moment and the current position of another charge B moving at constant velocity, and these events have a spacelike separation; but this can be explained in terms of the position of charge B at some previous time, an event in the past light cone of charge A, plus the electromagnetic field's ability to naturally "extrapolate" the position of charge B as long as it keeps moving at constant velocity. Any such dependence on events in the past light cone for Bell experiments, though, would either involve a "conspiracy" in the initial conditions, or it would involve ridiculously complex laws of nature that somehow "extrapolate" the precise future brain state of the experimenter at the moment of choice using only events in the past light cone of the source's emission event. And even if you are willing to allow such ridiculously complex laws, this probably doesn't make sense anyway: the brain is a chaotic system, so the choice would depend on everything in the past light cone of the choice-event at a given time; but some of the events in the past light cone of the choice-event lie outside the past light cone of the emission-event, so even Laplace's demon couldn't predict the choice using only the set of events in the past light cone of the emission-event.
ueit said:
The difference between the "billiard ball" theory and GR is that in the former the particles are not aware of each other (a particle in a distant place has no effect on another), while in the latter such awareness exists even beyond the light cone (if no non-deterministic events like nukes are allowed). For example, in the solar system the Earth accelerates towards the future position of the Sun, and not towards its retarded position (the place where we see the Sun).
But again, the type of motions that GR can "extrapolate" in this way are pretty limited, I think it may just be constant-velocity motion and spherically or cylindrically symmetric acceleration; any type of motion complicated enough to result in gravitational waves cannot be extrapolated in this way, so objects will not be pulled in exactly the direction of other object's current position in these circumstances.
ueit said:
A local, realistic, non-conspiracy type mechanism for EPR would be as follows:

1. every particle in the experimental setup sends a signal, at light speed towards the PDC source. (this is in fact true for every particle in the visible universe)

2. The calcium atom "reads" from those signals the position/momentum for each particle and "computes" their future evolution (including how the detector will be oriented at the time of detection). This might resemble the way Earth "reads" from the space curvature around it the position/momentum of other massive bodies and then "decides" how to accelerate.
But like I said, the more complicated the types of motion you want objects to be able to "extrapolate", the more complicated your fundamental laws have to be; and I think my parenthetical comment about how even Laplace's demon probably couldn't predict the experimenter's choice using only information about events in the past light cone of the source's emission event suggests that even ridiculously complicated laws couldn't do what you're suggesting without "conspiracy-like" restrictions on the initial conditions of the universe.
 
  • #193
JesseM said:
I didn't assume the nuclear explosion was random, though. You are free to assume that whatever non-gravitational forces are involved are also governed by deterministic laws, like classical electromagnetism, which can certainly be incorporated into GR.

Yeah, but introducing additional complexity in my analogy does no good.

The point is just that no matter what your complete set of fundamental laws are, as long as they have a light cone structure, then there should be no statistical correlation between events A and B with a spacelike separation unless there's some event or events in the past light cone of A and B which predetermines them in the right way to create the correlation.

I agree with this. More, I think there is good evidence (the uniformity of microwave background radiation) that all the visible universe passed a period when all its particles were able to "make contact" with each other:
http://en.wikipedia.org/wiki/Inflationary_theory

In physical cosmology, cosmic inflation is the idea that the nascent universe passed through a phase of exponential expansion that was driven by a negative-pressure vacuum energy density.[1] As a direct consequence of this expansion, all of the observable universe originated in a small causally-connected region. Inflation answers the classic conundrums of the big bang cosmology: why does the universe appear flat, homogeneous and isotropic in accordance with the cosmological principle when one would expect, on the basis of the physics of the big bang, a highly curved, inhomogeneous universe?
(emphasis mine)

And if A is the event of the source emitting particles in a certain state, and B is the event of an experimenter's brain making a choice of which setting to use on a given trial, then explaining the violation of Bell inequalities in terms of such a predetermining event in A and B's past light cone amounts to the "conspiracy" assumption discussed earlier. Do you disagree with any of this?

I disagree with "predetermining event in A and B's past light cone" formulation. All the particles in the universe are correlated with each other from the time of big-bang. Even if those particles are now far from each other, the correlation between their motion remains.

Are you sure about that?

Of course I'm not. It's just one of many possible scenarios.

I believe it is true that if a body is moving at a constant velocity then we'll feel the pull from its current position. This is analogous to the situation in classical electromagnetism, where if you have a charge moving at constant velocity, other charges will be attracted to its current position rather than its retarded position; but this is in effect because the electromagnetic field has a built-in ability to "extrapolate" linear movement, and there are no actual signals moving faster than light. If the charge were to accelerate, other charges would continue to be attracted to where the original charge would have been had it continued to move in a straight line, until they receive an "update" on its position in the form of electromagnetic waves. Because electromagnetic waves depend on a dipole moment while gravitational waves depend on a quadrupole moment, the gravitational field can "extrapolate" some more general types of movement than the electromagnetic field, like a spherically symmetric collapsing star; but in any situation where gravitational waves are generated, other objects do not anticipate all the motions, and continue to be attracted to the "wrong" positions until the gravitational waves reach them. And wouldn't one planet smashing into another and knocking it off course generate gravitational waves? See Sources of gravitational waves on wikipedia.
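The constant-velocity "extrapolation" described above can be checked numerically: the velocity part of the Liénard-Wiechert field, evaluated at the retarded time, points along the line from the charge's *present* position. A minimal sketch (units with c = 1; the geometry is arbitrary and chosen only for illustration):

```python
import math

c = 1.0
beta = (0.5, 0.0, 0.0)            # charge moving at 0.5c along x
P, t_obs = (0.0, 1.0, 0.0), 0.0   # field point and observation time
# the charge passes the origin at t = 0, so its "present" position is (0,0,0)

def charge_pos(t):
    return tuple(b * c * t for b in beta)

# solve |P - x(t_r)| = c*(t_obs - t_r) for the retarded time by bisection
def f(t):
    return math.dist(P, charge_pos(t)) - c * (t_obs - t)

lo, hi = -10.0, 0.0               # f(lo) < 0 < f(hi) brackets the root
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        hi = mid
    else:
        lo = mid
t_r = (lo + hi) / 2

x_r = charge_pos(t_r)
R = math.dist(P, x_r)
n = tuple((p - x) / R for p, x in zip(P, x_r))
# for constant velocity, the field direction reduces to (n - beta)
direction = tuple(ni - bi for ni, bi in zip(n, beta))
print([round(d, 6) for d in direction])
# the x-component vanishes: the pull is along (0, 1, 0), i.e. straight from
# the charge's PRESENT position at the origin to P, not from the retarded one
```

No signal travels faster than light here; the alignment with the present position falls out of the retarded-time formula, which is exactly the "built-in extrapolation" being discussed.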

I didn't think about the planet "smashing into" Mars, only about one passing close enough to significantly alter its orbit. In a collision a lot of energy is lost as heat, and the analogy wouldn't work (the error introduced by gravitational waves is negligible, though). Now, GR is only an analogy; I do not claim that the mechanism behind EPR is exactly like GR. In any case, accelerated motion is "extrapolated" very well by GR, so a non-local mechanism like the one proposed by Newtonian gravity works very well for all but extreme situations like the merging of black holes or neutron stars. I know that it doesn't work perfectly, and that's why I specified that for the case I gave you, Newtonian theory is a good approximation. I see no reason to assume that a better or even perfect "extrapolation" of accelerated motion (which, apart from uniform motion, is the only possible motion of a point particle) cannot be accomplished by a theory. At least, I know of no proof of that.

In any case, it seems to me the argument about the light cone structure is pretty airtight. In electromagnetism there is a correlation between the direction one charge A is being pulled at a given moment and the current position of another charge B moving at constant velocity, and these events have a spacelike separation, but this could be explained in terms of the position of the charge B at some previous time, an event in the past light cone of charge A, plus the electromagnetic field's ability to naturally "extrapolate" the position of charge B as long as it keeps moving at constant velocity. But any such dependence on events in the past light cone for Bell experiments would either involve a "conspiracy" in the initial conditions, or it would involve ridiculously complex laws of nature that were somehow "extrapolating" the precise future brain state of the experimenter at the moment of choice using only events in the past light cone of the event of the source emitting particles (and even if you are willing to allow such ridiculously complex laws of nature, this probably doesn't make sense anyway since the brain is a chaotic system and the choice would probably depend on everything in the past light cone of the experimenter's choice at a given time, but at any given time some of the events which lie in the past light cone of the choice-event are outside the past light cone of the event of the source emitting the particles, so even Laplace's demon couldn't predict the choice using only the set of events in the past light cone of the emission-event).

When you are speaking about “ridiculously complex laws of nature that were somehow "extrapolating" the precise future brain state of the experimenter at the moment of choice” you are referring to a high-level description of facts. The mechanism I propose works at the lowest level. A calcium atom doesn’t “know” anything about brains, computers or experiments; it only “looks” for two suitable absorbers (other atoms) for the entangled photons. When such absorbers are found, a pair of photons is sent towards their extrapolated position. That’s all. The chain of events by which those absorbers arrive at their position is irrelevant. You may have a human pushing a button that hits a monkey; then the monkey starts a computer running a random number generator that in turn commands an engine to change the polarizer’s position. If, at the low level, the “extrapolation” mechanism works perfectly, or at least with good enough accuracy, the calcium atom would not be “fooled” and Bell’s inequality would be violated.

But again, the type of motions that GR can "extrapolate" in this way are pretty limited, I think it may just be constant-velocity motion and spherically or cylindrically symmetric acceleration; any type of motion complicated enough to result in gravitational waves cannot be extrapolated in this way, so objects will not be pulled in exactly the direction of other object's current position in these circumstances. But like I said, the more complicated the types of motion you want objects to be able to "extrapolate", the more complicated your fundamental laws have to be; and I think my parenthetical comment about how even Laplace's demon probably couldn't predict the experimenter's choice using only information about events in the past light cone of the source's emission event suggests that even ridiculously complicated laws couldn't do what you're suggesting without "conspiracy-like" restrictions on the initial conditions of the universe.

From my answer above I conclude:

1. It should be enough to extrapolate accelerated motion. Other types are not possible for a point particle. Probably even the imperfect extrapolation of GR is enough to explain all experiments to date.
2. “all of the observable universe originated in a small causally-connected region”. This is the event that “links” the whole experimental setup. Since then all particles are correlated with each other because of the “extrapolation” effect.
3. No complicated laws must be postulated to deal with brains or different experimental tricks. Conservation laws, the microscopic equivalent of Malus’ law, plus the extrapolation mechanism will do.
 
Last edited by a moderator:
  • #194
QuantunEnigma said:
Does this refute Bell's theorem? Expert comment, please?

From abstract quant-ph/0703179

Disproof of Bell’s Theorem by Clifford Algebra Valued Local Variables

Joy Christian, Perimeter Institute, 31 Caroline Street North, Waterloo, Ontario N2L 2Y5, Canada, and Department of Physics, University of Oxford, Parks Road, Oxford OX1 3PU, England

It is shown that Bell’s theorem fails for the Clifford algebra valued local realistic variables. This is made evident by exactly reproducing quantum mechanical expectation value for the EPR-Bohm type spin correlations observable by means of a local, deterministic, Clifford algebra valued variable, without necessitating either remote contextuality or backward causation. Since Clifford product of multivector variables is non-commutative in general, the spin correlations derived within our locally causal model violate the CHSH inequality just as strongly as their quantum mechanical counterparts.
Let me make a comment on the Clifford-valued local realistic variables.
Although I have not completely understood the paper, it is not a surprise to me that local Clifford-valued realistic variables may simulate QM. This is because, in a sense, non-commuting variables are never truly local, even if they are local formally. Let me explain what I mean by this:
A formally local quantity is a quantity of the form A(x) or B(y), where x and y are positions of the first and the second particle, respectively. Now, if they are not commuting, then
[tex] A(x)B(y) \neq B(y)A(x)[/tex]
But how do two quantities A and B know that they should not commute if x is very far from y? This knowledge is a sort of nonlocality as well.

My opinion is that realistic variables (local or not) must be not only commuting, but represented by real numbers. This is because they are supposed to be measurable, while a measurable quantity must be a real number. Therefore, I believe that the Clifford-valued realistic variables are physically meaningless.

In fact, the claim that physical variables could be noncommuting numbers does not differ much from the claim that physical variables could be noncommuting operators or noncommuting matrices. But this is exactly what the realistic physical variables in QM are NOT supposed to be, because otherwise we deal with QM in the usual matrix/operator form.
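The non-commutativity at issue is easy to exhibit concretely. The Pauli matrices generate a matrix representation of a Clifford algebra, and their products depend on the order of the factors, unlike products of real numbers. A minimal sketch with plain 2x2 complex matrices (no claim that this reproduces Christian's construction, only the algebraic point):

```python
# multiply two 2x2 matrices of complex numbers
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pauli matrices: generators of (a representation of) a Clifford algebra
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]

xy = matmul2(sigma_x, sigma_y)   # equals  i * sigma_z
yx = matmul2(sigma_y, sigma_x)   # equals -i * sigma_z
print(xy == yx)  # False: the product order matters, unlike for real numbers
```

This is exactly why Demystifier's objection has bite: once the "local" quantities A(x) and B(y) are taken from such an algebra, their product A(x)B(y) is no longer the product of two independently-existing real measurement results.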
 
Last edited:
  • #195
ueit said:
I do not need to provide a physically plausible local-realistic mechanism for EPR, only a logically consistent one (without appealing to conspiracies as these are extremely non-parsimonious). That is enough to prove that your assertion regarding the applicability of Bell's theorem (in spite of what Bell himself said) is false.

Well, I'm not too sure how "logically consistent" your mechanism is if it can't meet my simple test (in which each possible outcome probability is non-negative).

I place your hypothesis in the bin with the other "proofs" that Bell is not applicable.
 
  • #196
Demystifier said:
My opinion is that realistic variables (local or not) must be not only commuting, but represented by real numbers. This is because they are supposed to be measurable, while a measurable quantity must be a real number. Therefore, I believe that the Clifford-valued realistic variables are physically meaningless.

In fact, the claim that physical variables could be noncommuting numbers does not differ much from the claim that physical variables could be noncommuting operators or noncommuting matrices. But this is exactly what the realistic physical variables in QM are NOT supposed to be, because otherwise we deal with QM in the usual matrix/operator form.

Well said.

I am always amazed at new hypotheses (such as Christian's) which purport to show a local realistic scenario that agrees with the predictions of QM - yet do not discuss the negative probabilities which result when the observer freely chooses between measurement settings. The entire realistic argument IS that the observer could do this! That is what everyone cares about - whether the results are observer dependent or not. So when the observer-independence issue is magically dropped (in this case by introducing non-commutativity), it is no big surprise that Bell is bypassed in the results. Of course, that is why Bell's Theorem is so important. His assumptions are very straightforward and easy to agree with.
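The "negative probability" point can be checked directly with the angles DrChinese proposed earlier in the thread (a = 0°, b = 67.5°, c = 45°). Demanding a single joint distribution over the eight outcome triples that reproduces the QM photon match rate cos²(θi − θj) for every pair forces one entry below zero. A sketch, assuming the natural symmetry p(+++) = p(−−−), p(++−) = p(−−+), and so on:

```python
import math

def cos2(deg):
    return math.cos(math.radians(deg)) ** 2

a, b, c = 0.0, 67.5, 45.0
# With p(+++)=p(---), p(++-)=p(--+), p(+-+)=p(-+-), p(+--)=p(-++),
# only four probabilities p1..p4 remain, constrained by the pair match rates:
#   p1 + p2 = cos^2(a-b)/2,  p1 + p3 = cos^2(a-c)/2,  p1 + p4 = cos^2(b-c)/2
# plus the normalisation p1 + p2 + p3 + p4 = 1/2.
m_ab, m_ac, m_bc = cos2(a - b) / 2, cos2(a - c) / 2, cos2(b - c) / 2
p1 = (m_ab + m_ac + m_bc - 0.5) / 2
p2, p3, p4 = m_ab - p1, m_ac - p1, m_bc - p1
print([round(p, 4) for p in (p1, p2, p3, p4)])
# p2 comes out negative (about -0.052): no genuine probability assignment
# over the eight cases can reproduce the QM match rates at these angles
```

This is the precise sense in which "realism" fails: the linear system has a unique solution, and that solution is not a probability distribution.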
 
  • #197
ueit said:
I agree with this. More than that, I think there is good evidence (the uniformity of the microwave background radiation) that all the visible universe passed through a period when all its particles were able to "make contact" with each other
Unless I'm misunderstanding something, that doesn't mean that there was any time when all the events in the past light cone of the event of the experimenter making a choice of what to measure were also in the past light cone of the event of the source sending out the particles. Again, if you don't place any special constraints on initial conditions, then even in a deterministic universe, a Laplacian demon with knowledge of everything in the past light cone of the source sending out the particles would not necessarily be able to predict the brain state of the experimenter at the time he made his choice of what to measure. Do you disagree?
ueit said:
I disagree with "predetermining event in A and B's past light cone" formulation. All the particles in the universe are correlated with each other from the time of big-bang. Even if those particles are now far from each other, the correlation between their motion remains.
"Correlated" is too vague. I think that inflationary theory would say that the past light-cones of the most widely-separated events we can see will partially overlap, so that the similarity of the CMBR in different regions can have a common past cause. But again, it doesn't mean that knowing the past light cone of one event would allow you to predict every other event, even in a perfectly deterministic universe, because any pair of spacelike separated events would have parts of their past light cones that are outside the past light cone of the other event. (This is assuming you don't try to define the past light cone of each event at the exact time of the initial singularity itself, since the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it...for every time slice after the singularity, though, knowing the complete physical state of a region of space would allow you to predict any future event whose past light cone lies entirely in that region, in a deterministic universe.)
JesseM said:
Are you sure about that?
ueit said:
Of course I'm not. It's just one of many possible scenarios.
I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process. This is a question about GR that would presumably have a single correct answer, so I'm not sure what you mean by "many possible scenarios"--perhaps you misunderstood what I was asking.
ueit said:
I didn't think about the planet "smashing into" Mars, only passing close enough to significantly alter its orbit.
That's fine, but like I said, my understanding is that GR can only "extrapolate" constant-velocity motion or situations involving acceleration which are spherically or cylindrically symmetric. I don't see how the situation of Mars being deflected from its orbit by a passing body could exhibit this kind of symmetry, so I'm pretty sure the Earth would not continue to be pulled towards Mars' present position throughout the process.
ueit said:
In any case, the accelerated motion is "extrapolated" very well by GR so that a non-local mechanism as the one proposed by Newtonian gravity works very well for all but extreme situations like the merging of black holes or neutron stars.
It only works as an approximation. If you're claiming that it works in the specific sense of objects continuing to be pulled towards other object's present positions rather than retarded positions, I believe you're wrong about that--again, the "extrapolation" only happens in the case of constant velocity or spherically/cylindrically symmetric motion AFAIK.
ueit said:
When you are speaking about “ridiculously complex laws of nature that were somehow "extrapolating" the precise future brain state of the experimenter at the moment of choice” you are referring to a high-level description of facts. The mechanism I propose works at the lowest level. A calcium atom doesn’t “know” anything about brains, computers or experiments; it only “looks” for two suitable absorbers (other atoms) for the entangled photons. When such absorbers are found, a pair of photons is sent towards their extrapolated position. That’s all.
By "complexity" I was referring to the mathematical complexity of the laws involved. We could say that in electromagnetism a charged particle "knows" where another particle would be now if it kept moving at constant velocity, and in GR a test particle "knows" where the surface of a collapsing shell would be if it maintains spherical symmetry; there isn't a literal calculation of this of course, but the laws are such that the particles act as if they know in terms of what direction they are pulled. In order for the source to act as though it knows the orientation of a distant polarizer which was fixed by the brain of a human experimenter, then even if we ignore the issue of some events in the past light cone of the experimenter's choice being outside the past light cone of the source emitting the particles, the "extrapolation" here would be far more complicated because of the extremely complicated and non-symmetrical motions of all the mutually interacting particles in the experimenter's brain which must be extrapolated from some past state, and presumably the laws that would make the source act this way would not have anything like the simplicity of electromagnetism or GR. We could think in terms of algorithmic complexity, for example--the local rules in a cellular-automata program simulating EM or GR would not require a hugely long program (although the actual calculations for a large number of 'cells' might require a lot of computing power), while it seems to me that the sort of rules you're imagining would involve a much, much longer program just to state the fundamental local rules.
ueit said:
1. It should be enough to extrapolate accelerated motion. Other types are not possible for a point particle. Probably even the imperfect extrapolation of GR is enough to explain all experiments to date.
You refer to "imperfect" extrapolation, but I'm pretty sure it's not as if GR can kinda-sorta extrapolate accelerations that aren't perfectly spherically or cylindrically symmetric, it's an all-or-nothing deal, just like with EM where the extrapolation is to where the other particle would be if it kept moving at an exactly constant velocity, not somewhere between a constant velocity and its true acceleration. GR wouldn't in any way begin to extrapolate the current positions of particles which are accelerating in all sorts of different directions in a non-symmetric way, with the direction and magnitude of each particle's acceleration always changing due to interactions with other particles (like all the different molecules and electrons in your brain).

And of course, even if you set things up so the detector angle was determined by some simple mechanism which GR could extrapolate, like the radius of a collapsing star at the moment the source emits its particles, the "extrapolation" just refers to where other objects will experience a gravitational pull, what sort of laws do you propose that would allow the source to "know" that the detector angle depends on this variable, and to modify the hidden variables based on the detector angles? Obviously there's nothing in GR itself that could do this.
ueit said:
2. “all of the observable universe originated in a small causally-connected region”. This is the event that “links” the whole experimental setup. Since then all particles are correlated with each other because of the “extrapolation” effect.
See above--like I said, this doesn't mean that knowing the past light cone of one event would allow you to automatically predict the outcome of another event with a spacelike separation from the first. The regions of the two past light cones will overlap in the very early universe, but there will be no finite moment after the singularity where the regions encompassed by the two past light cones at that moment are identical, there will always be some points in the past light cone of one that are outside the past light cone of the other. If the event we're talking about is the product of a nonlinear system exhibiting sensitive dependence on initial conditions like the brain, then it seems to me that even in a deterministic universe you'd need to know the complete state of the region of space inside the past light cone at an earlier time in order to predict the event. This is why I think that even Laplace's demon could not predict what the detector setting would be if he only knew about events in the past light cone of the source emitting the entangled particles. Do you disagree, and if so, why?
 
Last edited:
  • #198
Demystifier said:
Let me make a comment on the Clifford-valued local realistic variables.
Although I have not completely understood the paper, it is not a surprise to me that local Clifford-valued realistic variables may simulate QM. This is because, in a sense, non-commuting variables are never truly local, even if they are local formally. Let me explain what I mean by this:
A formally local quantity is a quantity of the form A(x) or B(y), where x and y are positions of the first and the second particle, respectively. Now, if they are not commuting, then
[tex] A(x)B(y) \neq B(y)A(x)[/tex]
But how do two quantities A and B know that they should not commute if x is very far from y? This knowledge is a sort of nonlocality as well.
Well, there are some problems with this. First, a RAA: let's suppose A and B are real-valued functions. How can A and B know that they should commute if x is very far from y?

Anyways, this is very simple. If A and B are Clifford-valued functions, (or if they are real-valued functions), then A(x) and B(y) are numbers. I repeat, they are not numbers located someplace in space-time: they are simply numbers.

OTOH, if A and B took values in a line bundle, so that A(x) is a number located someplace in space-time, then A(x)B(y) is nonsensical: we need a connection (and a path from x to y) to transport a value at x to the fiber at y before we can do any arithmetic with them. (This is true, even if our line bundle is of real numbers)




My opinion is that realistic variables (local or not) must be not only commuting, but represented by real numbers. This is because they are supposed to be measurable, while a measurable quantity must be a real number.
Just to be clear -- is "a measurable quantity must be a real number" your opinion, or are you claiming that as fact?
 
Last edited:
  • #199
Hurkyl said:
Just to be clear -- is "a measurable quantity must be a real number" your opinion, or are you claiming that as fact?
It is my opinion. But I am quite certain about it, so I would even dare to claim that it is a fact.
 
  • #200
DrChinese said:
Well, I'm not too sure how "logically consistent" your mechanism is if it can't meet my simple test (in which each possible outcome probability is non-negative).

My mechanism is as follows:

The PDC source generates photon pairs that obey Malus’ law (n = cos^2(alpha)), where:

n = the probability that the two photons have the same spin on the two measurement axes.

alpha = angle between the polarizers.

The detectors' settings are not communicated non-locally but are "extrapolated" from the past state of the system.

This mechanism is therefore local, realistic (the photons had the measured spin all along) and gives the same predictions as QM, but it would not pass your test. This is because your constraints are irrelevant as far as locality and realism are concerned. It is the statistical-independence assumption that requires the probabilities to add to 100%, and my mechanism denies that assumption.
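The mechanism described above can be made concrete in a few lines: if the source is granted knowledge of both future detector settings (the "extrapolation" assumption, i.e. statistical independence is dropped), then a perfectly local sampler with predetermined outcomes reproduces the cos² match rate and exceeds the CHSH bound of 2. A sketch with the standard photon CHSH angles (the angles and sample size are illustrative):

```python
import math
import random

random.seed(0)

def correlation(alice, bob, n=200_000):
    """Assumes the source 'knows' BOTH settings in advance -- this is exactly
    the statistical-independence loophole, not a counterexample to Bell."""
    same = 0
    for _ in range(n):
        a = random.choice((1, -1))
        # the partner's predetermined outcome is drawn using both settings,
        # enforcing the Malus-law match rate n = cos^2(alpha)
        p_match = math.cos(math.radians(alice - bob)) ** 2
        b = a if random.random() < p_match else -a
        same += (a == b)
    return 2 * (same / n) - 1   # E = P(same) - P(different)

E = {s: correlation(*s) for s in [(0, 22.5), (0, 67.5), (45, 22.5), (45, 67.5)]}
S = E[(0, 22.5)] - E[(0, 67.5)] + E[(45, 22.5)] + E[(45, 67.5)]
print(round(S, 2))  # about 2.83: above the CHSH bound of 2
```

Note what this does and does not show: the simulation violates CHSH only because the settings are fed to the source, which is precisely the "free variables" assumption Bell flagged; with independently chosen settings the same local sampler cannot exceed 2.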

I place your hypothesis in the bin with the other "proofs" that Bell is not applicable.

There is nothing to prove. Bell himself clearly stated that the theorem depends on the assumption of statistical independence. You seem not to be able to accept this, for a reason I can't understand.

Bell J., Speakable And Unspeakable In Quantum Mechanics, p. 100:

It has been argued that QM is not locally causal and cannot be embedded in a locally causal theory. That conclusion depends on treating certain experimental parameters, typically the orientations of polarization filters, as free variables
(emphasis mine)

Please read carefully the above quote and try to understand the irrelevance of your "test" in my case.
 
  • #201
ueit said:
There is nothing to prove. Bell himself clearly stated that the theorem depends on the assumption of statistical independence. [DrChinese seems] not to be able to accept this, for a reason I can't understand.

Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.

Although it's not part of Bell's Theorem, the assumption that pairs of entangled particles can be space-like separated is untested (and possibly untestable), but necessary for valid Bell experiments.
 
  • #202
NateTG said:
Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.
What does the assumption of statistical independence between spacelike separated events have to do with strong determinism? This statistical independence would be expected even in a completely deterministic universe, unless some past event that influenced both later events caused the correlation (but I explained in my last post to ueit why this doesn't seem to work if one event is that of a human brain making a choice, or any other event with sensitive dependence on initial conditions).
NateTG said:
Although it's not part of Bell's Theorem, the assumption that pairs of entangled particles can be space-like separated is untested (and possibly untestable), but necessary for valid Bell experiments.
What do you mean by "the assumption that pairs of entangled particles can be space-like separated"? Spacelike separation only applies to events, not particles with extended worldlines. And there's no disputing that the event of the experimenter choosing a detector setting and the event of the source emitting the particles can be spacelike separated: all you have to do is find the coordinates of each event and verify that the spatial distance between them is greater than c times the time interval between them; that's all "spacelike separated" means.
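The check described above is just the invariant-interval condition. A minimal sketch (the event coordinates are illustrative):

```python
def spacelike_separated(event1, event2, c=299_792_458.0):
    """Events are (t, x, y, z) tuples in SI units: the separation is
    spacelike when the spatial distance exceeds c times the time difference."""
    t1, *r1 = event1
    t2, *r2 = event2
    dist = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
    return dist > c * abs(t2 - t1)

# two events 1 km apart in space but only 1 microsecond apart in time:
print(spacelike_separated((0, 0, 0, 0), (1e-6, 1000, 0, 0)))   # True
# same spatial separation but a full second apart: light could connect them
print(spacelike_separated((0, 0, 0, 0), (1.0, 1000, 0, 0)))    # False
```

Because the sign of the interval is frame-invariant, any two observers agree on whether the choice-event and the emission-event are spacelike separated, even though they may disagree about which came first.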
 
  • #203
ueit said:
My mechanism is as follows:

The PDC source generates photon pairs that obey Malus’ law (n = cos^2(alpha)), where:

n = the probability that the two photons have the same spin on the two measurement axes.

alpha = angle between the polarizers.

The detectors' settings are not communicated non-locally but are "extrapolated" from the past state of the system.

This is absurd, and has already been ruled out by experiment (Aspect and many subsequent variations).

Detector orientations were changed mid-flight, so it is too late for them to be related in any way to the state of the system at the time the entangled photons were created. You are certainly free to reject generally accepted science, but you should not expect others to follow suit without a better argument than that. Your hypothesis is akin to "intelligent design" arguments: there is no evidence to support your viewpoint - and all evidence that should be there is completely missing.
 
  • #204
DrChinese said:
This is absurd, and has already been ruled out by experiment (Aspect and many subsequent variations).

Detector orientations were changed mid-flight, so it is too late for them to be related in any way to the state of the system at the time the entangled photons were created. You are certainly free to reject generally accepted science, but you should not expect others to follow suit without a better argument than that. Your hypothesis is akin to "intelligent design" arguments: there is no evidence to support your viewpoint - and all evidence that should be there is completely missing.
What ueit is proposing is that the source is able to predict the actions of the experimenters in advance, including anything they do while the photons are in mid-flight--this is basically similar to the retrocausality loophole in Bell's theorem (ueit bases his idea on a half-baked analogy with the way the electromagnetic and gravitational forces allow objects to "extrapolate" certain limited kinds of motions of other objects--see this article from John Baez's site). But aside from this requiring absurdly complicated laws of physics, I've already pointed out to ueit that it won't work without "conspiracies" in initial conditions, because there are events in the past light cone of the experimenter's choice of detector setting that are outside the past light cone of the source's emitting the particles, so that even assuming perfect determinism, a Laplacian demon sitting at the source would not be able to predict the experimenter's choice given knowledge of everything in the past light cone of the emission-event, all the way back to the Big Bang.
 
Last edited:
  • #205
JesseM said:
What ueit is proposing is that the source is able to predict the actions of the experimenters in advance, including anything they do while the photons are in mid-flight--this is basically similar to the retrocausality loophole in Bell's theorem (ueit bases his idea on a half-baked analogy with the way the electromagnetic and gravitational forces allow objects to "extrapolate" certain limited kinds of motions of other objects--see this article from John Baez's site). But aside from this requiring absurdly complicated laws of physics, I've already pointed out to ueit that it won't work without "conspiracies" in initial conditions, because there are events in the past light cone of the experimenter's choice of detector setting that are outside the past light cone of the source's emitting the particles, so that even assuming perfect determinism, a Laplacian demon sitting at the source would not be able to predict the experimenter's choice given knowledge of everything in the past light cone of the emission-event, all the way back to the Big Bang.

Yes, I quite agree with you. I thought you raised several good points about the entire concept (for instance, the idea that there are many light cones which start to come into play, not just the common one).

But ueit's idea is STILL absurd because it is not really science. There is not the slightest bit of evidence that the polarizer settings have any causal connection to the creation of the entangled particles - nor does any other prior event (or set of events) whatsoever. You may as well say that God wants to trick us about the cos^2 theta relationship (which is your "conspiracy" concept), as this relationship emerges regardless of whether the polarizers are set randomly by computer or by human hand.

The fact is, QM specifies the cos^2 theta relationship while ueit's "hypothesis" actually does not. The same hypothesis also fails to accurately predict a single physical law or any known phenomena whatsoever. Additionally, there is no known mechanism by which such causality can be transmitted. Ergo, I personally do not believe it qualifies for discussion on this board.
 
  • #206
What do you mean by "the assumption that pairs of entangled partices can be space-like separated"? Spacelike separation only applies to events, not particles with extended worldlines. And there's no disputing that the event of the experimenter choosing a detector setting and the source emitting the particles can be spacelike separated.

It's a bit silly, but suppose, for a moment, that whenever two particles are entangled, there is a very tiny wormhole connecting them, so that although these two particles appear to be separated, they are both really aspects of a single particle, and local. Of course, these wormholes would have to have some rather odd properties, but at that point it's impossible to have spacelike separation between the measurements.
 
  • #207
NateTG said:
It's a bit silly, but suppose, ...tiny wormhole connecting them so that although these two particles appear to be separated, they are both really aspects of a single particle and local.
If true, Bell is still correct; the "and local" you refer to is not "Bell Local". You're describing a reality that is still not local and realistic, just as a theory like BM can use guide waves, or MWI can use extra dimensions, to create its own version of "local". But that is not "local and realistic" in the classical meaning, which is all Bell is designed to test for. QM is, by definition, Bell non-local.
 
  • #208
RandallB said:
If true, Bell is still correct; the "and local" you refer to is not "Bell Local". You're describing a reality that is still not local and realistic, just as a theory like BM can use guide waves, or MWI can use extra dimensions, to create its own version of "local". But that is not "local and realistic" in the classical meaning, which is all Bell is designed to test for.

Just to be clear, this an attempt to illustrate an 'unstated assumption' of Bell's Theorem and not an attempt to refute it, QM, or any experimental results.

Although the presence of wormholes can cause some problems with causality, even Einstein knew that they were consistent with GR (and are thus local). AFAICT it's a bit ambiguous whether this qualifies as Bell-local, because people generally assume that pairs of entangled particles can be space-like separated.

Now, if for every pair of particles A and B there is a zero-length wormhole W between them, then the order of measurements on A and B is no longer dependent on the observer's frame of reference. Instead, the measurement of A and the measurement of B can be considered to occur twice in any particular frame of reference (although each is only observable once). Since the 'first measurement' is well-defined, it is trivial to assign values to the particles at that point.
 
Last edited:
  • #209
Demystifier said:
Let me make a comment on the Clifford-valued local realistic variables.

Although I have not completely understood the paper, it is not a surprise to me that local Clifford-valued realistic variables may simulate QM. This is because, in a sense, non-commuting variables are never truly local, even if they are local formally. Let me explain what I mean by this:

A formally local quantity is a quantity of the form A(x) or B(y), where x and y are positions of the first and the second particle, respectively. Now, if they are not commuting, then
[tex] A(x)B(y) \neq B(y)A(x)[/tex]

But how do two quantities A and B know that they should not commute if x is very far from y? This knowledge is a sort of nonlocality as well.

My opinion is that realistic variables (local or not) must be not only commuting, but represented by real numbers. This is because they are supposed to be measurable, while a measurable quantity must be a real number. Therefore, I believe that the Clifford-valued realistic variables are physically meaningless.

In fact, the claim that physical variables could be noncommuting numbers does not differ much from the claim that physical variables could be noncommuting operators or noncommuting matrices. But this is exactly what the realistic physical variables in QM are NOT supposed to be, because otherwise we deal with QM in the usual matrix/operator form.
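To make the distinction concrete, here is a minimal sketch (my own illustration, not from the paper under discussion) contrasting commuting real-number variables with non-commuting matrix-valued ones, using the Pauli matrices as the standard example:

```python
import numpy as np

# Pauli matrices: the textbook example of non-commuting "variables".
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Real numbers always commute...
a, b = 3.0, 7.0
assert a * b == b * a

# ...but matrix-valued "variables" generally do not:
commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
print(np.allclose(commutator, 0))                # False
print(np.allclose(commutator, 2j * sigma_z))     # True: [sx, sy] = 2i sz
```

This is exactly Demystifier's point: a measurement outcome is a single real number, and real numbers commute, so non-commuting "realistic variables" sit uneasily with the idea of directly measurable quantities.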

A paper has been added to the archive that discusses some of this in greater depth and may be of interest:

Title: Non-Viability of a Counter-Argument to Bell's Theorem
 
  • #210
Unless I'm misunderstanding something, that doesn't mean that there was any time when all the events in the past light cone of the event of the experimenter making a choice of what to measure were also in the past light cone of the event of the source sending out the particles. Again, if you don't place any special constraints on initial conditions, then even in a deterministic universe, a Laplacian demon with knowledge of everything in the past light cone of the source sending out the particles would not necessarily be able to predict the brain state of the experimenter at the time he made his choice of what to measure. Do you disagree?

I think that inflationary theory would say that the past light-cones of the most widely-separated events we can see will partially overlap, so that the similarity of the CMBR in different regions can have a common past cause. But again, it doesn't mean that knowing the past light cone of one event would allow you to predict every other event, even in a perfectly deterministic universe, because any pair of spacelike separated events would have parts of their past light cones that are outside the past light cone of the other event. (This is assuming you don't try to define the past light cone of each event at the exact time of the initial singularity itself, since the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it...for every time slice after the singularity, though, knowing the complete physical state of a region of space would allow you to predict any future event whose past light cone lies entirely in that region, in a deterministic universe.)

João Magueijo’s article “Plan B for the cosmos” (Scientific American, Jan. 2001, p. 47) reads:

Inflationary theory postulates that the early universe expanded so fast that the range of light was phenomenally large. Seemingly disjointed regions could thus have communicated with one another and reached a common temperature and density. When the inflationary expansion ended, these regions began to fall out of touch.

It does not take much thought to realize that the same thing could have been achieved if light simply had traveled faster in the early universe than it does today. Fast light could have stitched together a patchwork of otherwise disconnected regions. These regions could have homogenized themselves. As the speed of light slowed, those regions would have fallen out of contact.

It is clear from the above quote that the early universe was in thermal equilibrium. That means there was enough time for the EM field of each particle to reach all other particles (it only takes light one second to travel between two opposite points on a sphere with a diameter of 3 x 10^8 m, but this time is hardly enough to bring such a sphere of gas to an almost perfect thermal equilibrium). A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.
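As a quick sanity check on the parenthetical above, a back-of-the-envelope sketch (toy numbers only, not a cosmological calculation):

```python
# Light-crossing time for a sphere whose diameter equals the distance
# light travels in one second.
c = 3.0e8          # speed of light, m/s
diameter = 3.0e8   # sphere diameter, m

t_cross = diameter / c
print(t_cross)     # 1.0 s per crossing; thermal equilibrium needs many
                   # such crossings' worth of interactions, hence the
                   # point that one second is "hardly enough"
```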

I also disagree that “the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it”. We don’t have a theory describing the big-bang, so I don’t see why we should assume that it was a non-deterministic phenomenon rather than a deterministic one. If QM is deterministic after all, I don’t see where a stochastic big-bang could come from.

To summarize, my questions are:

1. Do we have compelling evidence that the big-bang was a non-deterministic process?
2. Does the present evidence exclude the possibility that “past light-cones of the most widely-separated events” overlap completely?

If the answer to either of these questions is “no”, my hypothesis stands.

I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process.

Yes, because this is a case where Newtonian theory applies well (small mass density). I’m not familiar with the GR formalism, but I bet that the difference between the predictions of the two theories is very small.

This is a question about GR that would presumably have a single correct answer, so I'm not sure what you mean by "many possible scenarios"--perhaps you misunderstood what I was asking.

Yes, I misunderstood you.

It only works as an approximation. If you're claiming that it works in the specific sense of objects continuing to be pulled towards other object's present positions rather than retarded positions, I believe you're wrong about that--again, the "extrapolation" only happens in the case of constant velocity or spherically/cylindrically symmetric motion AFAIK.

In Newtonian gravity the force is instantaneous. So, yes, in any system for which Newtonian gravity is a good approximation, the objects are “pulled towards other objects' present positions”. The article you linked from John Baez’s site claims that uniformly accelerated motion is extrapolated by GR as well.
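The “extrapolation” in the constant-velocity case can be sketched numerically. This is an illustrative kinematic check with toy numbers of my own choosing: linearly extrapolating forward from the retarded data reproduces the present position of a constant-velocity source, which is why the field can point at the source's present position without any signal travelling faster than light.

```python
import numpy as np

c = 3.0e8                        # speed of light, m/s
v = np.array([0.1 * c, 0.0])     # source velocity (constant)
x0 = np.array([0.0, 0.0])        # source position at t = 0

t = 2.0                          # observation time, s
x_actual = x0 + v * t            # true present position

t_ret = 1.2                      # some earlier (retarded) time
x_ret = x0 + v * t_ret           # position at the retarded time
x_extrap = x_ret + v * (t - t_ret)  # linear extrapolation forward

print(np.allclose(x_extrap, x_actual))  # True for constant velocity
```

For an accelerating source the same linear extrapolation would miss, which is the all-or-nothing point being argued about in this exchange.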

By "complexity" I was referring to the mathematical complexity of the laws involved. We could say that in electromagnetism a charged particle "knows" where another particle would be now if it kept moving at constant velocity, and in GR a test particle "knows" where the surface of a collapsing shell would be if it maintains spherical symmetry; there isn't a literal calculation of this of course, but the laws are such that the particles act as if they know in terms of what direction they are pulled. In order for the source to act as though it knows the orientation of a distant polarizer which was fixed by the brain of a human experimenter, then even if we ignore the issue of some events in the past light cone of the experimenter's choice being outside the past light cone of the source emitting the particles, the "extrapolation" here would be far more complicated because of the extremely complicated and non-symmetrical motions of all the mutually interacting particles in the experimenter's brain which must be extrapolated from some past state, and presumably the laws that would make the source act this way would not have anything like the simplicity of electromagnetism or GR. We could think in terms of algorithmic complexity, for example--the local rules in a cellular-automata program simulating EM or GR would not require a hugely long program (although the actual calculations for a large number of 'cells' might require a lot of computing power), while it seems to me that the sort of rules you're imagining would involve a much, much longer program just to state the fundamental local rules.

EM extrapolates uniform motion, GR uniformly accelerated motion. I’m not a mathematician, so I have no idea whether a mechanism able to extrapolate generic accelerated motion must necessarily be as complex or as difficult to simulate on a computer as you imply. You are, of course, free to express an opinion, but at this point I don’t think you’ve put forward a compelling argument.

You refer to "imperfect" extrapolation, but I'm pretty sure it's not as if GR can kinda-sorta extrapolate accelerations that aren't perfectly spherically or cylindrically symmetric, it's an all-or-nothing deal, just like with EM where the extrapolation is to where the other particle would be if it kept moving at an exactly constant velocity, not somewhere between a constant velocity and its true acceleration. GR wouldn't in any way begin to extrapolate the current positions of particles which are accelerating in all sorts of different directions in a non-symmetric way, with the direction and magnitude of each particle's acceleration always changing due to interactions with other particles (like all the different molecules and electrons in your brain).

If what you are saying is true, then we should expect Newtonian gravity to fail miserably when dealing with non-uniformly accelerated motion, like a planet in an elliptical orbit, right? Anyway, you are probably right that an imperfect extrapolation would be useless because of chaos, so a mechanism able to perfectly extrapolate accelerated motion is required.
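For what it's worth, the point that Newtonian (instantaneous) gravity copes fine with non-uniform acceleration can be illustrated with a toy integration of an elliptical orbit. This is a sketch in arbitrary units (GM = 1), not a GR comparison; the check is that the orbit's energy stays conserved even though the acceleration changes continuously in both magnitude and direction:

```python
import numpy as np

GM = 1.0
dt = 1e-3

r = np.array([1.0, 0.0])   # initial position
v = np.array([0.0, 1.2])   # speed != circular speed -> elliptical orbit

def accel(r):
    # Instantaneous Newtonian pull toward the central mass at the origin.
    return -GM * r / np.linalg.norm(r) ** 3

E0 = 0.5 * v @ v - GM / np.linalg.norm(r)   # specific orbital energy

v = v + 0.5 * dt * accel(r)                 # leapfrog half-kick
for _ in range(20000):                      # roughly 1.3 orbital periods
    r = r + dt * v
    v = v + dt * accel(r)
v = v - 0.5 * dt * accel(r)                 # resynchronize velocity

E1 = 0.5 * v @ v - GM / np.linalg.norm(r)
print(abs(E1 - E0) < 1e-4)                  # True: energy conserved
```

The leapfrog scheme is used here because it conserves orbital energy well over long runs, making the conservation check meaningful.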

And of course, even if you set things up so the detector angle was determined by some simple mechanism which GR could extrapolate, like the radius of a collapsing star at the moment the source emits its particles, the "extrapolation" just refers to where other objects will experience a gravitational pull, what sort of laws do you propose that would allow the source to "know" that the detector angle depends on this variable, and to modify the hidden variables based on the detector angles? Obviously there's nothing in GR itself that could do this.

I have no idea how the mathematical implementation of such a mechanism would look. Perhaps one could start with Cramer’s transactional interpretation and replace the advanced wave that is sent by the absorber back in time towards the emitter with a “normal”, retarded wave coming from the detector prior to emission, and make the emission event depend on the “extrapolated” position of the absorber.

See above--like I said, this doesn't mean that knowing the past light cone of one event would allow you to automatically predict the outcome of another event with a spacelike separation from the first. The regions of the two past light cones will overlap in the very early universe, but there will be no finite moment after the singularity where the regions encompassed by the two past light cones at that moment are identical, there will always be some points in the past light cone of one that are outside the past light cone of the other. If the event we're talking about is the product of a nonlinear system exhibiting sensitive dependence on initial conditions like the brain, then it seems to me that even in a deterministic universe you'd need to know the complete state of the region of space inside the past light cone at an earlier time in order to predict the event. This is why I think that even Laplace's demon could not predict what the detector setting would be if he only knew about events in the past light cone of the source emitting the entangled particles. Do you disagree, and if so, why?

I disagree, see my two questions above.
 
