# OK Corral: Local versus non-local QM

paw
....There is not a single shred of evidence, direct, or indirect, that a wavefunction physically exists....
Have I missed something here? Aren't there solutions of the Schroedinger equation that accurately predict the orbitals of the hydrogen atom and the electron density of H2? Didn't solutions of the SE allow the development of the scanning tunnelling microscope? Aren't de Broglie waves wavefunctions? Don't they accurately predict the various interference patterns of electrons? I think the evidence is almost overwhelming.

Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that. If it walks like a duck and quacks like a duck.......

JesseM
João Magueijo's article "Plan B for the Cosmos" (Scientific American, Jan. 2001, p. 47) reads:
Inflationary theory postulates that the early universe expanded so fast that the range of light was phenomenally large. Seemingly disjointed regions could thus have communicated with one another and reached a common temperature and density. When the inflationary expansion ended, these regions began to fall out of touch.

It does not take much thought to realize that the same thing could have been achieved if light simply had traveled faster in the early universe than it does today. Fast light could have stitched together a patchwork of otherwise disconnected regions. These regions could have homogenized themselves. As the speed of light slowed, those regions would have fallen out of contact.
It is clear from the above quote that the early universe was in thermal equilibrium. That means there was enough time for the EM field of each particle to reach all other particles (light takes only one second to travel between two opposite points on a sphere with a diameter of 3 x 10^8 m, but that is hardly enough time to bring such a sphere of gas to an almost perfect thermal equilibrium). A Laplacian demon "riding on a particle" could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.
Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogeneous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogeneous.

So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete past light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself.
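The constant-offset claim in that paragraph is easy to verify with a few lines of arithmetic; here is a minimal sketch in 1+1-dimensional flat spacetime (units with c = 1, illustrative coordinates only):

```python
# Past light cone of an event (t, x) in 1+1D flat spacetime with c = 1:
# at an earlier time tp < t it spans the interval [x - (t - tp), x + (t - tp)].
def past_cone(t, x, tp):
    r = t - tp
    return (x - r, x + r)

# Two simultaneous (hence spacelike-separated) events 1 light-second apart.
e1 = (10.0, 0.0)
e2 = (10.0, 1.0)

for tp in [9.0, 5.0, 0.0, -100.0]:
    a_lo, a_hi = past_cone(*e1, tp)
    b_lo, b_hi = past_cone(*e2, tp)
    # The offset between corresponding cone edges equals the spatial
    # separation of the events at every earlier time.
    assert b_lo - a_lo == 1.0 and b_hi - a_hi == 1.0
```

However far back tp goes, the two cones are shifted copies of each other, so each always contains events the other excludes.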

The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me that at the event of joining-up, photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR; it just suggested that if the universe had reached equilibrium, then knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense.
ueit said:
I also disagree that "the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it". We don't have a theory to describe the big bang, so I don't see why we should assume that it was a non-deterministic phenomenon rather than a deterministic one. If QM is deterministic after all, I don't see where a stochastic big bang could come from.
I wasn't saying anything about the big bang being stochastic, just that the initial singularity in GR is fairly "featureless"--you can't extrapolate the later state of the universe from some sort of description of the singularity itself. This doesn't really mean GR is non-deterministic; you could just consider the singularity to not be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.
JesseM said:
I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process.
ueit said:
Yes, because this is a case where Newtonian theory applies well (small mass density). I'm not familiar with the GR formalism, but I bet that the difference between the predictions of the two theories is very small.
Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth. If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there?
ueit said:
In Newtonian gravity the force is instantaneous. So, yes, in any system for which Newtonian gravity is a good approximation the objects are “pulled towards other object's present positions”.
You're talking as though the only reason Newtonian gravity could fail to be a good approximation is because of the retarded vs. current position issue! But there are all kinds of ways in which GR departs wildly from Newtonian gravity which have nothing to do with this issue, like the prediction that sufficiently massive objects can form black holes, or the prediction of gravitational time dilation. And the fact is that the orbit of a given planet can be approximated well by ignoring the other planets altogether (or only including Jupiter), so obviously the issue of the Earth being attracted to the current vs. retarded position of Mars is going to have little effect on our predictions.
ueit said:
The article you linked from John Baez’s site claims that uniform accelerated motion is extrapolated by GR as well.
Well, the Wikipedia article says:
In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or contracting sphere) or cylindrically symmetric (like a spinning disk).
So either one is wrong or we're misunderstanding what "uniform acceleration" means...is it possible that Baez was only talking about uniform acceleration caused by gravity as opposed to other forces, and that gravity only causes uniform acceleration in an orbit situation which also has spherical/cylindrical symmetry? I don't know the answer; this might be another question to ask on the relativity forum. In any case, I'm pretty sure that the situation you envisioned where Mars is deflected from its orbit by a passing body would not qualify as either "uniform acceleration" or "spherically/cylindrically symmetric".
ueit said:
EM extrapolates uniform motion, GR uniform accelerated motion. I’m not a mathematician so I have no idea if a mechanism able to extrapolate a generic accelerated motion should necessarily be as complex or so difficult to simulate on a computer as you imply. You are, of course, free to express an opinion but at this point I don’t think you’ve put forward a compelling argument.
You're right that I don't have a rigorous argument, but I'm just using the following intuition--if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption it continued to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past? Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two.
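That intuition can be made concrete by counting operations; a rough sketch (the particle count and step count are arbitrary illustrative choices):

```python
# Extrapolating uniform motion or uniform acceleration is a closed-form,
# constant-cost calculation:
def extrapolate_uniform(x0, v, dt):
    return x0 + v * dt                      # one multiply-add

def extrapolate_accel(x0, v0, a, dt):
    return x0 + v0 * dt + 0.5 * a * dt**2   # still O(1)

# Extrapolating an interacting many-body system has no such shortcut: each
# timestep recomputes every pairwise interaction, O(n^2) per step.
def nbody_force_evals(n, steps):
    return steps * n * (n - 1) // 2         # pairwise force evaluations

# 1000 particles stepped a million times, versus one multiply-add above:
ops = nbody_force_evals(1000, 10**6)
assert ops == 499_500_000_000   # ~5e11 evaluations vs O(1) closed forms
```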
ueit said:
If what you are saying is true then we should expect Newtonian gravity to miserably fail when dealing with a non-uniform accelerated motion, like a planet in an elliptical orbit, right?
No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue.
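As a sanity check on the 22-minute figure (using the commonly quoted maximum Earth-Mars separation of roughly 401 million km; the exact value depends on the orbital configuration):

```python
# Light-travel delay from Mars to Earth at maximum separation, i.e. with the
# two planets on opposite sides of the Sun (~401 million km, approximate).
c_km_s = 299792.458          # speed of light, km/s
d_max_km = 401e6             # approximate maximum Earth-Mars distance

delay_min = d_max_km / c_km_s / 60
assert 22 < delay_min < 23   # ~22.3 minutes, consistent with the quoted figure
```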

JesseM
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe.
But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions. As I argued in my last post, ueit's argument about thermal equilibrium in the early universe establishing that all past light cones merge and become identical at some point doesn't make sense.

As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state $\vec{h}$ which corresponds to a complete list of the results of 'wavefunction collapses'.
It's not exactly what I propose. Take the case of gravity in a Newtonian framework. Each object "knows" where all other objects are, instantaneously. It then acts as if it's doing all the calculations, applying the inverse square law. General relativity explains this apparently non-local behavior through a local mechanism where the instantaneous position of each body in the system is extrapolated from the past state. That past state is "inferred" from the space curvature around the object.
By analogy, we might think that the EPR source "infers" the past state of the detectors from the EM field around it, extrapolates the future detector orientation and generates a pair of entangled particles with a suitable spin.

DrChinese
Gold Member
DrC said: If there is a wavefunction and it collapses why have we never noticed wavefunction collapse in an experiment? There is not a single shred of evidence, direct, or indirect, that a wavefunction physically exists or wavefunction collapse occurs.

This is a fallacious argument. Just because something is unpredictable does not make it non-deterministic. This is true even in the classical example of a ball at the top of a hill.
Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't, we see randomness everywhere else. So the ONLY time we see these are with entangled particles. Hmmm. Gee, is this a strained explanation or what? And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that?

It is sorta like invoking the phases of the moon to explain why there are more murders during a full moon, and not being willing to accept that there are no fewer murders at other times. Or do we use this as an explanation only when it suits us?

If this isn't ad hoc science, what is?

Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't, we see randomness everywhere else. So the ONLY time we see these are with entangled particles. Hmmm. Gee, is this a strained explanation or what?
That's simple. Even if each particle "looks" for a suitable detector orientation before emission, only for entangled particles do we have a set of supplementary conditions (conservation of angular momentum, same emission time) that enable us to observe the correlations. In order to release a pair of entangled particles both detectors must be in a suitable state; that's not the case for a "normal" particle.

And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that?
I put Malus' law in by hand without any particular reason other than to reproduce QM's prediction. I'm getting tired of pointing out that the burden of proof is on you. You make the strong claim that no local-realistic mechanism can reproduce QM's prediction. On the other hand I don't claim that my hypothesis is true or even likely. I only claim that it is possible.
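For reference, the QM prediction at issue: for polarization-entangled photons the coincidence rate follows a Malus-style cos^2 of the relative angle, giving the correlation function E(a, b) = cos 2(a - b), which violates the CHSH form of Bell's inequality. A short check at the standard textbook angles (not tied to any particular experiment):

```python
import math

# QM correlation for polarization-entangled photons: E(a, b) = cos(2(a - b)),
# the cos^2 coincidence law rewritten as a +/-1-outcome correlation.
def E(a, b):
    return math.cos(2 * (a - b))

# CHSH combination at the standard angles (radians): QM gives 2*sqrt(2),
# exceeding the local-realist bound of 2.
a, a2, b, b2 = 0.0, math.pi / 4, math.pi / 8, 3 * math.pi / 8
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
assert abs(S - 2 * math.sqrt(2)) < 1e-12 and S > 2
```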

To give you an example, von Neumann's proof against the existence of hidden-variable theories is wrong even if no such theory is provided. It was wrong even before Bohm published his interpretation and will remain wrong even if BM is falsified. So, asking me to provide evidence for the local-realistic mechanism I propose is a red herring.

If this isn't ad hoc science, what is?
It certainly is ad-hoc but so what? Your bold claim regarding Bell's theorem is still proven false.

DrChinese
1. In order to release a pair of entangled particles both detectors must be in a suitable state, that's not the case for a "normal" particle.

2. I put Malus' law by hand without any particular reason other than reproduce QM's prediction.

3. Your bold claim regarding Bell's theorem is still proven false.
1. :rofl:

2. :rofl:

3. Still agreed to by virtually every scientist in the field.

JesseM
The question of what is or is not a valid loophole in Bell's theorem should not be a matter of opinion, and it also should not be affected by how ridiculous or implausible a theory based on the loophole would have to be. For example, everyone agrees the "conspiracy in initial conditions" is a logically valid loophole, even though virtually everyone also agrees that it's not worth taking seriously as a real possibility. If it wasn't for the light cone objection, I'd say that ueit had pointed out another valid loophole, even though I personally wouldn't take it seriously because of the separate objection of the need for ridiculously complicated laws of physics to "extrapolate" the future states of nonlinear systems with a huge number of interacting parts like the human brain. But I do think the light cone objection shows that ueit's idea doesn't work even as a logical possibility. If he wanted to argue that each particle has, in effect, not just a record of everything in its past light cone, but a record of the state of the entire universe immediately after the Big Bang (or at the singularity, if you imagine the singularity itself has 'hidden variables' which determine future states of the universe), then this would be a logically valid loophole, although I would see it as just a version of the "conspiracy" loophole (since each particle's 'record' of the entire universe's past state can't really be explained dynamically, it would seem to be part of the initial conditions).

Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogeneous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogeneous.

So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete past light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself.

The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me that at the event of joining-up, photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR; it just suggested that if the universe had reached equilibrium, then knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense.
OK, I think I understand your point. The CMB isotropy does not require the whole early universe to be in thermal equilibrium. But, does the evidence we have require the opposite, that the whole universe was not in equilibrium? If not, my hypothesis is still consistent with extant data.

I wasn't saying anything about the big bang being stochastic, just that the initial singularity in GR is fairly "featureless"--you can't extrapolate the later state of the universe from some sort of description of the singularity itself. This doesn't really mean GR is non-deterministic; you could just consider the singularity to not be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.
I don't think your argument applies in this case. For example, the pre-big bang universe might have been a Planck-sized "molecule" of an exotic type, that produced all particles in a deterministic manner.

Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth.
Forget about that example. Take Pluto's orbit or the Earth-Moon-Sun system. In both cases the acceleration felt by each object is non-uniform (the distance between Pluto and the Sun ranges from 4.3 to 7.3 billion km, and during a solar eclipse the force acting on the Moon differs significantly from that during a lunar eclipse). However, both systems are well described by Newtonian gravity, hence the retardation effect is almost null. I think the main reason is that, quantitatively, the gravitational radiation is extremely small. The Wikipedia article you've linked says that the Earth loses about 300 joules as gravitational radiation out of a total of 2.7 x 10^33 joules.
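Taking those figures at face value (and reading the 300 joules as a per-second loss rate, which is an assumption here), the radiated fraction is indeed negligible:

```python
# Quoted numbers: radiated power taken as ~300 J/s against a total orbital
# energy of 2.7e33 J (figures from the cited Wikipedia article; the
# per-second reading is an assumption, not stated in the post).
power_w = 300.0
orbital_energy_j = 2.7e33

fraction_per_year = power_w * 365.25 * 24 * 3600 / orbital_energy_j
assert fraction_per_year < 1e-22   # utterly negligible on planetary timescales
```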

If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there?
I'll do that.

You're right that I don't have a rigorous argument, but I'm just using the following intuition--if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption it continued to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past? Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two.
If my analogy with gravity stands (all kinds of motions are well extrapolated in the small mass density regime), the difference in complexity should be about the same as between the Newtonian inverse square law and GR.

No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue.
See my other examples above.

JesseM,

I've started a new thread on "Special & General Relativity" forum named "General Relativity vs Newtonian Mechanics".

NateTG
Homework Helper
Have I missed something here? Aren't there solutions of the Schroedinger equation that accurately predict the orbitals of the hydrogen atom and the electron density of H2? Didn't solutions of the SE allow the development of the scanning tunnelling microscope? Aren't de Broglie waves wavefunctions? Don't they accurately predict the various interference patterns of electrons? I think the evidence is almost overwhelming.

Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that. If it walks like a duck and quacks like a duck.......
Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable, and instead simply predicts probabilities of experimental results. In other words, as long as there are predictively equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one.

NateTG
But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions.
In a sense it's a 'small conspiracy', since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that ensures correlation.

JesseM
In a sense it's a 'small conspiracy', since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that ensures correlation.
Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory.

NateTG
Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory.
BM is explicitly non-local because it requires synchronization. The 'instantaneous communication' aspect can be handled by anticipation since BM is deterministic.

Now, let's suppose that we can assign a synchronization value to each event in space-time, with the properties that:
(1) the synchronization value is a continuous function on space-time
(2) if event a is in the causal past of event b, then the synchronization value of a is less than the synchronization value of b.

Now, we should be able to apply BM to 'sheets' of space-time with a fixed synchronization value rather than instants. Moreover for flat regions of space-time, these sheets should correspond to 'instants' so the predictions should align with experimental results.

Of course, it's not necessarily possible to have a suitable synchronization value. For example, in time-travel scenarios it's clearly impossible because of condition (2), but EPR isn't really a paradox in those scenarios either.
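Condition (2) is just monotonicity along the causal order, which is easy to state as a check; a toy sketch with made-up events and causal links (nothing here is derived from an actual metric):

```python
# Toy check of condition (2): a candidate synchronization function must be
# strictly increasing along every causal link. Events and links are
# illustrative placeholders, not derived from any real spacetime.
causal_links = [("bang", "a"), ("bang", "b"), ("a", "c"), ("b", "c")]

def valid_sync(sync):
    # sync maps each event name to its synchronization value
    return all(sync[p] < sync[q] for p, q in causal_links)

good = {"bang": 0.0, "a": 1.0, "b": 1.5, "c": 2.0}
bad = {"bang": 0.0, "a": 2.0, "b": 1.5, "c": 1.0}   # violates the a -> c link

assert valid_sync(good) and not valid_sync(bad)
```

A closed timelike curve would put an event in its own causal past, so no such function could exist, matching the time-travel caveat above.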

JesseM,

It seems I was wrong about GR being capable of extrapolating generic accelerations. I thought that the very small energy lost to radiation would not have a detectable effect on planetary orbits. This is true, but there are other effects, like Mercury's perihelion advance.

You said:

Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.
We know that GR alone cannot describe the big-bang as it doesn't provide a mechanism by which the different particles are created. So, if the pre-big-bang universe was not a null-sized object, but with a structure of some sort, and if it existed long enough in that state for a light signal to travel along the whole thing, would this ensure identical past light cones?

Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable, and instead simply predicts probabilities of experimental results. In other words, as long as there are predictively equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one.
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.

NateTG
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.
Well, ultimately, it comes down to what 'simplest' means. And that requires some sort of arbitrary notions.

I don't know... it's easy to specify a wavefunction; you just write down some equations, and then without any complex further assumptions, you can talk about decoherence and so on to show that humans and their experiments are structures in the wavefunction. But how do you specify the collection of humans and their experiments, without deriving it from something more basic? I think any theory that's anthropocentric like that is bound to violate Occam.

NateTG
We have one theory which says that:
1. We can predict experimental results using some method X
2. There are things that are not observable used in X.
3. These unobservable things have physical reality.
And another theory that says:
1. We can predict experimental results using the same method X.
2. There are things that are not observable used in X

Even considering that 'physical reality' is a poorly defined notion, it seems like the latter theory is simpler.

The crucial difference here being that in the former theory, 1 is explained by 2 and 3, whereas in the latter theory, 1 is an assumption that comes from nowhere. Occam is bothered by complex assumptions, not complex conclusions. Once you've explained something, you can cross it off your list of baggage.

Also, the latter theory isn't complete; either the unobservable things exist or they don't, and you have to pick one.

1. We can predict experimental results using QED.
2. The 4-potential $$A^{\mu}$$ is unobservable.

Surely we don't have to make a choice, but can rely on experiment?

I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.
Actually, it is quite possible that you can do without a wavefunction (I guess Occam would be happy). In http://www.arxiv.org/abs/quant-ph/0509044 , the Klein-Gordon-Maxwell electrodynamics is discussed, the unitary gauge is chosen (meaning the wavefunction is real), and it is proven that one can eliminate the wavefunction from the equations and formulate the Cauchy problem for the 4-potential of the electromagnetic field. That means that if you know the 4-potential and its time derivatives at some moment in time, you can calculate them for any moment in time; in other words, the 4-potential evolves independently.