OK Corral: Local versus non-local QM

  • Thread starter: wm
  • Tags: local QM
  • #201
ueit said:
There is nothing to prove. Bell himself clearly stated that the theorem depends on the assumption of statistical independence. [DrChinese seems] not to be able to accept this, for a reason I can't understand.

Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.

Although it's not part of Bell's Theorem, the assumption that pairs of entangled particles can be space-like separated is untested (and possibly untestable), but necessary for valid Bell experiments.
 
  • #202
NateTG said:
Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.
What does the assumption of statistical independence between spacelike separated events have to do with strong determinism? This statistical independence would be expected even in a completely deterministic universe, unless some past event that influenced both later events caused the correlation (but I explained in my last post to ueit why this doesn't seem to work if one event is that of a human brain making a choice, or any other event with sensitive dependence on initial conditions).
NateTG said:
Although it's not part of Bell's Theorem, the assumption that pairs of entangled particles can be space-like separated is untested (and possibly untestable), but necessary for valid Bell experiments.
What do you mean by "the assumption that pairs of entangled particles can be space-like separated"? Spacelike separation only applies to events, not particles with extended worldlines. And there's no disputing that the event of the experimenter choosing a detector setting and the event of the source emitting the particles can be spacelike separated: just find the coordinates of each event and verify that the spatial distance between them is greater than c times the time interval between them; that's all that "spacelike separated" means.
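As a quick sketch of that arithmetic (a minimal illustration, with events given as (t, x, y, z) in seconds and meters):

Code:
# Check whether two events are spacelike separated.
C = 3.0e8  # speed of light, m/s (approximate)

def spacelike_separated(e1, e2):
    dt = e2[0] - e1[0]
    dx, dy, dz = e2[1] - e1[1], e2[2] - e1[2], e2[3] - e1[3]
    spatial = (dx**2 + dy**2 + dz**2) ** 0.5
    # Spacelike: spatial distance exceeds c times the time interval,
    # so no light-speed (or slower) signal can connect the two events.
    return spatial > C * abs(dt)

# Example: a detector setting chosen 30 km from the source, 50 microseconds
# after emission: 3.0e4 m > C * 5.0e-5 s = 1.5e4 m, so spacelike.
print(spacelike_separated((0.0, 0.0, 0.0, 0.0), (5.0e-5, 3.0e4, 0.0, 0.0)))  # True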
 
  • #203
ueit said:
My mechanism is as follows:

The PDC source generates photon pairs that obey Malus’ law (n = cos^2(alpha)), where:

n = the probability that the two photons have the same spin on the two measurement axes.

alpha = angle between the polarizers.

The detectors' settings are not communicated non-locally but are "extrapolated" from the past state of the system.

This is absurd, and has already been ruled out by experiment (Aspect and many subsequent variations).

Detector orientations were changed mid-flight, so they cannot in any way be related to the state of the system at the time the entangled photons were created. You are certainly free to reject generally accepted science, but you should not expect others to follow suit without a better argument than that. Your hypothesis is akin to "intelligent design" arguments: there is no evidence to support your viewpoint - and all evidence that should be there is completely missing.
 
  • #204
DrChinese said:
This is absurd, and has already been ruled out by experiment (Aspect and many subsequent variations).

Detector orientations were changed mid-flight, so they cannot in any way be related to the state of the system at the time the entangled photons were created. You are certainly free to reject generally accepted science, but you should not expect others to follow suit without a better argument than that. Your hypothesis is akin to "intelligent design" arguments: there is no evidence to support your viewpoint - and all evidence that should be there is completely missing.
What ueit is proposing is that the source is able to predict the actions of the experimenters in advance, including anything they do while the photons are in mid-flight--this is basically similar to the retrocausality loophole in Bell's theorem (ueit bases his idea on a half-baked analogy with the way the electromagnetic and gravitational forces allow objects to "extrapolate" certain limited kinds of motions of other objects--see this article from John Baez's site). But aside from this requiring absurdly complicated laws of physics, I've already pointed out to ueit that it won't work without "conspiracies" in initial conditions, because there are events in the past light cone of the experimenter's choice of detector setting that are outside the past light cone of the source's emitting the particles, so that even assuming perfect determinism, a Laplacian demon sitting at the source would not be able to predict the experimenter's choice given knowledge of everything in the past light cone of the emission-event, all the way back to the Big Bang.
 
Last edited:
  • #205
JesseM said:
What ueit is proposing is that the source is able to predict the actions of the experimenters in advance, including anything they do while the photons are in mid-flight--this is basically similar to the retrocausality loophole in Bell's theorem (ueit bases his idea on a half-baked analogy with the way the electromagnetic and gravitational forces allow objects to "extrapolate" certain limited kinds of motions of other objects--see this article from John Baez's site). But aside from this requiring absurdly complicated laws of physics, I've already pointed out to ueit that it won't work without "conspiracies" in initial conditions, because there are events in the past light cone of the experimenter's choice of detector setting that are outside the past light cone of the source's emitting the particles, so that even assuming perfect determinism, a Laplacian demon sitting at the source would not be able to predict the experimenter's choice given knowledge of everything in the past light cone of the emission-event, all the way back to the Big Bang.

Yes, I quite agree with you. I thought you raised several good points about the entire concept (for instance, the idea that there are many light cones which start to come into play, not just the common one).

But ueit's idea is STILL absurd because it is not really science. There is not the slightest bit of evidence that the polarizer settings have any causal connection to the creation of the entangled particles - nor does that of any other prior event (or set of events) whatsoever. You may as well say that God wants to trick us about the cos^2 theta relationship (which is your "conspiracy" concept), since this relationship emerges regardless of whether the polarizers are set randomly by computer or by human hand.

The fact is, QM specifies the cos^2 theta relationship while ueit's "hypothesis" actually does not. The same hypothesis also fails to accurately predict a single physical law or any known phenomenon whatsoever. Additionally, there is no known mechanism by which such causality can be transmitted. Ergo, I personally do not believe it qualifies for discussion on this board.
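For concreteness: the cos^2 theta coincidence law fixes the pair correlation E(a,b) = P_same - P_diff = cos 2(a-b), and at the standard CHSH angles that correlation reaches 2*sqrt(2), beyond the bound of 2 that every local realistic model must satisfy. A minimal numerical sketch of that arithmetic:

Code:
import math

def E(a_deg, b_deg):
    # Photon-pair correlation implied by the cos^2 coincidence law.
    return math.cos(2 * math.radians(a_deg - b_deg))

# Standard CHSH polarizer settings, in degrees.
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(S, 4))  # 2.8284 = 2*sqrt(2), above the local realistic bound of 2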
 
  • #206
JesseM said:
What do you mean by "the assumption that pairs of entangled particles can be space-like separated"? Spacelike separation only applies to events, not particles with extended worldlines. And there's no disputing that the event of the experimenter choosing a detector setting and the source emitting the particles can be spacelike separated.

It's a bit silly, but suppose, for a moment, that whenever two particles are entangled, there is a very tiny wormhole connecting them, so that although these two particles appear to be separated, they are both really aspects of a single particle, and local. Of course, these wormholes would have to have some rather odd properties, but at that point it's impossible to have spacelike separation between the measurements.
 
  • #207
NateTG said:
It's a bit silly, but suppose, ...tiny wormhole connecting them, so that although these two particles appear to be separated, they are both really aspects of a single particle, and local.
If true, Bell is still correct: the "and local" you refer to is not "Bell local". You're describing a reality that is still not local and realistic, just as a theory like BM can use guide waves, or MWI extra dimensions, to create its own version of "local". But that is not "local and realistic" in the classical meaning, which is all Bell is designed to test for. QM is by definition Bell non-local.
 
  • #208
RandallB said:
If true, Bell is still correct: the "and local" you refer to is not "Bell local". You're describing a reality that is still not local and realistic, just as a theory like BM can use guide waves, or MWI extra dimensions, to create its own version of "local". But that is not "local and realistic" in the classical meaning, which is all Bell is designed to test for.

Just to be clear, this is an attempt to illustrate an 'unstated assumption' of Bell's Theorem and not an attempt to refute it, QM, or any experimental results.

Although the presence of wormholes can cause some problems with causality, even Einstein knew that they were consistent with GR (and are thus local). AFAICT it's a bit ambiguous whether this qualifies as Bell-local, because people generally assume that pairs of entangled particles can be space-like separated.

Now, if for every pair of particles A and B there is a zero-length wormhole W between them, then the order of measurements on A or B is no longer dependent on the observer's frame of reference. Instead the measurement of A and the measurement of B can be considered to occur twice for any particular frame of reference (although each is only observable once). Since the 'first measurement' is well-defined, it's trivial to assign values to the particles at that point.
 
Last edited:
  • #209
Demystifier said:
Let me make a comment on the Clifford-valued local realistic variables.

Although I have not completely understood the paper, it is not a surprise to me that local Clifford-valued realistic variables may simulate QM. This is because, in a sense, non-commuting variables are never truly local, even if they are local formally. Let me explain what I mean by this:

A formally local quantity is a quantity of the form A(x) or B(y), where x and y are positions of the first and the second particle, respectively. Now, if they are not commuting, then
A(x)B(y) \neq B(y)A(x)

But how do two quantities A and B know that they should not commute if x is very far from y? This knowledge is a sort of nonlocality as well.

My opinion is that realistic variables (local or not) must be not only commuting, but represented by real numbers. This is because they are supposed to be measurable, while a measurable quantity must be a real number. Therefore, I believe that the Clifford-valued realistic variables are physically meaningless.

In fact, the claim that physical variables could be noncommuting numbers does not differ much from the claim that physical variables could be noncommuting operators or noncommuting matrices. But this is exactly what the realistic physical variables in QM are NOT supposed to be, because otherwise we deal with QM in the usual matrix/operator form.
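For a concrete instance of the noncommutativity at issue, take the Pauli spin matrices (a minimal sketch):

Code:
import numpy as np

# Pauli matrices, the textbook noncommuting observables of spin-1/2 QM.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# sx @ sy = i*sigma_z while sy @ sx = -i*sigma_z, so they do not commute.
print(np.allclose(sx @ sy, sy @ sx))  # False
print(sx @ sy - sy @ sx)              # 2i*sigma_z, the nonzero commutator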

A paper has been added to the archive that discusses some of this in greater depth and may be of interest:

Title: Non-Viability of a Counter-Argument to Bell's Theorem
 
  • #210
JesseM said:
Unless I'm misunderstanding something, that doesn't mean that there was any time when all the events in the past light cone of the event of the experimenter making a choice of what to measure were also in the past light cone of the event of the source sending out the particles. Again, if you don't place any special constraints on initial conditions, then even in a deterministic universe, a Laplacian demon with knowledge of everything in the past light cone of the source sending out the particles would not necessarily be able to predict the brain state of the experimenter at the time he made his choice of what to measure. Do you disagree?

I think that inflationary theory would say that the past light-cones of the most widely-separated events we can see will partially overlap, so that the similarity of the CMBR in different regions can have a common past cause. But again, it doesn't mean that knowing the past light cone of one event would allow you to predict every other event, even in a perfectly deterministic universe, because any pair of spacelike separated events would have parts of their past light cones that are outside the past light cone of the other event. (This is assuming you don't try to define the past light cone of each event at the exact time of the initial singularity itself, since the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it...for every time slice after the singularity, though, knowing the complete physical state of a region of space would allow you to predict any future event whose past light cone lies entirely in that region, in a deterministic universe.)

João Magueijo’s article “Plan B for the cosmos” (Scientific American, Jan. 2001, p. 47) reads:

Inflationary theory postulates that the early universe expanded so fast that the range of light was phenomenally large. Seemingly disjointed regions could thus have communicated with one another and reached a common temperature and density. When the inflationary expansion ended, these regions began to fall out of touch.

It does not take much thought to realize that the same thing could have been achieved if light simply had traveled faster in the early universe than it does today. Fast light could have stitched together a patchwork of otherwise disconnected regions. These regions could have homogenized themselves. As the speed of light slowed, those regions would have fallen out of contact.

It is clear from the above quote that the early universe was in thermal equilibrium. That means that there was enough time for the EM field of each particle to reach all other particles (it only takes light one second to travel between two opposite points on a sphere with a diameter of 3 x 10^8 m, but this time is hardly enough to bring such a sphere of gas to an almost perfect thermal equilibrium). A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.

I also disagree that “the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it”. We don’t have a theory to describe the big bang, so I don’t see why we should assume that it was a non-deterministic phenomenon rather than a deterministic one. If QM is deterministic after all, I don’t see where a stochastic big bang could come from.

To summarize, my questions are:

1. Do we have compelling evidence that the big bang was a non-deterministic process?
2. Does the present evidence exclude the possibility that “past light-cones of the most widely-separated events” overlap completely?

If the answer to either of these questions is “no”, my hypothesis stands.

JesseM said:
I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process.

Yes, because this is a case where Newtonian theory applies well (small mass density). I’m not accustomed to the GR formalism, but I bet that the difference between the predictions of the two theories is very small.

JesseM said:
This is a question about GR that would presumably have a single correct answer, so I'm not sure what you mean by "many possible scenarios"--perhaps you misunderstood what I was asking.

Yes, I misunderstood you.

JesseM said:
It only works as an approximation. If you're claiming that it works in the specific sense of objects continuing to be pulled towards other object's present positions rather than retarded positions, I believe you're wrong about that--again, the "extrapolation" only happens in the case of constant velocity or spherically/cylindrically symmetric motion AFAIK.

In Newtonian gravity the force is instantaneous. So, yes, in any system for which Newtonian gravity is a good approximation the objects are “pulled towards other object's present positions”. The article you linked from John Baez’s site claims that uniform accelerated motion is extrapolated by GR as well.

JesseM said:
By "complexity" I was referring to the mathematical complexity of the laws involved. We could say that in electromagnetism a charged particle "knows" where another particle would be now if it kept moving at constant velocity, and in GR a test particle "knows" where the surface of a collapsing shell would be if it maintains spherical symmetry; there isn't a literal calculation of this of course, but the laws are such that the particles act as if they know in terms of what direction they are pulled. In order for the source to act as though it knows the orientation of a distant polarizer which was fixed by the brain of a human experimenter, then even if we ignore the issue of some events in the past light cone of the experimenter's choice being outside the past light cone of the source emitting the particles, the "extrapolation" here would be far more complicated because of the extremely complicated and non-symmetrical motions of all the mutually interacting particles in the experimenter's brain which must be extrapolated from some past state, and presumably the laws that would make the source act this way would not have anything like the simplicity of electromagnetism or GR. We could think in terms of algorithmic complexity, for example--the local rules in a cellular-automata program simulating EM or GR would not require a hugely long program (although the actual calculations for a large number of 'cells' might require a lot of computing power), while it seems to me that the sort of rules you're imagining would involve a much, much longer program just to state the fundamental local rules.

EM extrapolates uniform motion, GR uniform accelerated motion. I’m not a mathematician so I have no idea if a mechanism able to extrapolate a generic accelerated motion should necessarily be as complex or so difficult to simulate on a computer as you imply. You are, of course, free to express an opinion but at this point I don’t think you’ve put forward a compelling argument.

JesseM said:
You refer to "imperfect" extrapolation, but I'm pretty sure it's not as if GR can kinda-sorta extrapolate accelerations that aren't perfectly spherically or cylindrically symmetric, it's an all-or-nothing deal, just like with EM where the extrapolation is to where the other particle would be if it kept moving at an exactly constant velocity, not somewhere between a constant velocity and its true acceleration. GR wouldn't in any way begin to extrapolate the current positions of particles which are accelerating in all sorts of different directions in a non-symmetric way, with the direction and magnitude of each particle's acceleration always changing due to interactions with other particles (like all the different molecules and electrons in your brain).

If what you are saying is true then we should expect Newtonian gravity to miserably fail when dealing with a non-uniform accelerated motion, like a planet in an elliptical orbit, right? Anyway, probably you are right that an imperfect extrapolation would be useless because of chaos, so a mechanism able to perfectly extrapolate accelerated motion is required.

JesseM said:
And of course, even if you set things up so the detector angle was determined by some simple mechanism which GR could extrapolate, like the radius of a collapsing star at the moment the source emits its particles, the "extrapolation" just refers to where other objects will experience a gravitational pull, what sort of laws do you propose that would allow the source to "know" that the detector angle depends on this variable, and to modify the hidden variables based on the detector angles? Obviously there's nothing in GR itself that could do this.

I have no idea how the mathematical implementation of such a mechanism would look. Probably one could start with Cramer’s transactional interpretation, replace the advanced wave that is sent by the absorber back in time towards the emitter with a “normal”, retarded wave coming from the detector prior to emission, and make the emission event depend on the “extrapolated” position of the absorber.

JesseM said:
See above--like I said, this doesn't mean that knowing the past light cone of one event would allow you to automatically predict the outcome of another event with a spacelike separation from the first. The regions of the two past light cones will overlap in the very early universe, but there will be no finite moment after the singularity where the regions encompassed by the two past light cones at that moment are identical, there will always be some points in the past light cone of one that are outside the past light cone of the other. If the event we're talking about is the product of a nonlinear system exhibiting sensitive dependence on initial conditions like the brain, then it seems to me that even in a deterministic universe you'd need to know the complete state of the region of space inside the past light cone at an earlier time in order to predict the event. This is why I think that even Laplace's demon could not predict what the detector setting would be if he only knew about events in the past light cone of the source emitting the entangled particles. Do you disagree, and if so, why?

I disagree, see my two questions above.
 
  • #211
DrChinese said:
But ueit's idea is STILL absurd because it is not really science. There is not the slightest bit of evidence that the polarizer settings have any causal connection to the creation of the entangled particles - nor does that of any other prior event (or set of events) whatsoever.

Please specify what evidence supports the idea of a non-realistic universe.
Please specify what evidence supports the idea of a non-local universe.

DrChinese said:
The fact is, QM specifies the cos^2 theta relationship while ueit's "hypothesis" actually does not. The same hypothesis also fails to accurately predict a single physical law or any known phenomenon whatsoever. Additionally, there is no known mechanism by which such causality can be transmitted. Ergo, I personally do not believe it qualifies for discussion on this board.

You shift the burden of proof. Again.
 
  • #212
ueit said:
A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.

Another exaggeration. This is not science! If you are such a demon with amazing god-like powers, please show how you might do this.

Otherwise, you should stick to accepted theory & experiment. Obviously, your hypothesis is FAR from accepted as there are no known field effects capable of providing this amount of information. And the Heisenberg Uncertainty Principle flat out excludes this hypothesis for even ONE particle.
 
  • #213
ueit said:
You shift the burden of proof. Again.

There is a big difference between your position and mine, and that allows me to do this successfully.

Your ad hoc opinion belongs in Theory Development, where you can attempt to develop it into a testable scientific hypothesis. On the other hand, my position is orthodox science with the backing of both theory and experiment.

Please quit telling readers here that entangled particle wave functions were predetermined from the initial pre-inflationary state of the universe unless you have some specific evidence of that.
 
  • #214
Just as a reminder, the old "Theory Development" forum was superseded some time ago by the "Independent Research" forum.
 
  • #215
jtbell said:
Just as a reminder, the old "Theory Development" forum was superseded some time ago by the "Independent Research" forum.

I stand corrected. :smile:

I would say that ueit has had a run with this idea; it has been fairly discussed; forum members (besides myself) have addressed the substantial weaknesses in the concept; and ueit has failed to provide any substantiation for claims that are getting bolder and bolder.

Given PF guidelines, I think it is time for ueit to move on regarding this ad hoc and untestable hypothesis.
 
  • #216
DrChinese said:
Please quit telling readers here that entangled particle wave functions were predetermined from the initial pre-inflationary state of the universe unless you have some specific evidence of that.

We are not so easily influenced. But I believe, if wavefunctions exist and the big bang happened (and other ifs too), that ALL wavefunctions are entangled - but not in a conspiratorial way. So that kind of entanglement is irrelevant to the Bell scenario.
 
  • #217
DrChinese said:
ueit said:
You shift the burden of proof. Again.

There is a big difference between your position and mine, and that allows me to do this successfully.

Your ad hoc opinion belongs in Theory Development, where you can attempt to develop it into a testable scientific hypothesis. On the other hand, my position is orthodox science with the backing of both theory and experiment.

Please quit telling readers here that entangled particle wave functions were predetermined from the initial pre-inflationary state of the universe unless you have some specific evidence of that.

This is a discussion about interpretations of QM. They're called interpretations, rather than theories, because they do not make testable predictions.

It's unreasonable to require experimental evidence to validate an interpretation. As a theory, QM is agnostic regarding determinism, so QM makes no predictions that support or contradict the notion that the current state of the universe is completely determined by a prior state. (Technically, one could say that QM is a deterministic theory in the sense that it can model a deterministic reality.)

There are prediction-identical strongly deterministic interpretations of QM. Many worlds and Bohmian mechanics are two well-known examples - neither of which is at odds with orthodox science. Any experimental result that falsifies the notion of strong determinism would also invalidate both of those interpretations.

There are (as previously mentioned) philosophical and aesthetic reasons for refusing to accept arbitrary strong determinism. Arbitrary strong determinism is not predictive, and could easily be described as a 'conspiring universe'. However, since strong determinism is not falsifiable, it cannot be contradicted by scientific experiments.

To be clear, it is categorically impossible to experimentally contradict the notion that wave functions are predetermined.

N.B.: Talking (or posting) about things in terms of 'your position' or 'my position' can be useful and convenient, but can also foster confusion or lead to people taking things personally.
 
  • #218
JesseM said:
NateTG said:
Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.
What does the assumption of statistical independence between spacelike separated events have to do with strong determinism? This statistical independence would be expected even in a completely deterministic universe, unless some past event that influenced both later events caused the correlation (but I explained in my last post to ueit why this doesn't seem to work if one event is that of a human brain making a choice, or any other event with sensitive dependence on initial conditions).

As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state \vec{h} which corresponds to a complete list of the results of 'wavefunction collapses'.

What strong determinism has to do with statistical independence is that statistical independence may not be possible in a deterministic universe. As you have alluded to, by your reference to the human brain, the notion of statistical independence has also been called 'free will'.
 
  • #219
ueit said:
A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.

NateTG said:
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state \vec{h} which corresponds to a complete list of the results of 'wavefunction collapses'.

Listen to what is being said! It is absolutely as extreme as arguing that "Jesus said so" is an interpretation of physics that we need to discuss. Yes, there would need to be a local copy of the history (past and present, and presumably future as well) of the entire universe present in every particle. Yet ueit even acknowledged this as absurd earlier (but has apparently returned to it).

I like my interpretations with at least a modicum of science included. :smile: As you can tell, I am no fan of ad hoc theories that make NO specific predictions (testable or otherwise). Clearly there is not one IOTA of connection between this "interpretation" and the results we actually observe since "superdeterminism" makes no predictive results whatsoever other than "anything goes".

As regards this thread specifically: Where does the cos^2 theta relationship come from? Bohmian Mechanics has a framework, as does QM. So please, don't elevate ueit's ideas from ad hoc personal philosophy to the level of legitimate science.
 
  • #220
NateTG said:
It's unreasonable to require experimental evidence to validate an interpretation.

NateTG, I'll disagree with you on this one. We should expect a physical interpretation to pass muster against the existing substantial body of evidence.

1. If there is a map of the entire history of the universe embedded in every particle (as ueit says), why have we never noticed this in any experiment? There is not a single shred of evidence, direct or indirect, that this is so.

2. And there is substantial evidence - in the form of random results to any desired level - that there is NO connection between quantum states of many collections of particles that are in extremely close causal contact. An example being a radioactive sample of uranium, or polarization of photons emitted from ordinary light bulbs. It is amazing that ONLY the entangled photons from a PDC source (or similar) display these fascinating correlations, while all other particles are purely random. Yet ueit says all share a "common" knowledge of the evolution of the universe.

So while these blatant gaps may not bother you, they point out to me that this is a purely ad hoc personal theory and one which does not qualify as a legitimate interpretation of particle physics. Or perhaps there are papers that could be cited to at least add some air of science to this?

I think that discussions of interpretations are themselves completely legitimate, but ueit is well beyond that point. I recognize that you may have a different opinion of where that point is.
 
  • #221
DrChinese said:
Listen to what is being said! It is absolutely as extreme as arguing that "Jesus said so" is an interpretation of physics that we need to discuss.

What I'm arguing is that it's incorrect to say that 'Jesus said so' can be (scientifically) falsified. 'Jesus said so' is an awful interpretation of QM, but the salient argument for that is philosophical and not scientific in nature. However, this claim:
On the other hand, my position is orthodox science with the backing of both theory and experiment.
represents itself as a scientific one.

DrChinese said:
Yes, there would need to be a local copy of the history (past and present, and presumably future as well) of the entire universe present in every particle. Yet ueit even acknowledged this as absurd earlier (but has apparently returned to it).

Philosophically speaking, nobody seems to have any trouble with every particle having a local copy of its own past. It's not difficult to conceive of a universe where every particle's past includes sufficient information to predict the entire universe's space-time.

DrChinese said:
As regards this thread specifically: Where does the cos^2 theta relationship come from? Bohmian Mechanics has a framework, as does QM. So please, don't elevate ueit's ideas from ad hoc personal philosophy to the level of legitimate science.

I don't want to put words into ueit's mouth, but what he seems to be trying to describe is very similar to Bohmian Mechanics in a universe with a singular origin, which is local, realistic, and prediction-equivalent to QM.
 
  • #222
DrChinese said:
1. If there is a map of the entire history of the universe embedded in every particle (as ueit says), why have we never noticed this in any experiment? There is not a single shred of evidence, direct or indirect, that this is so.

If there is a wavefunction and it collapses, why have we never noticed wavefunction collapse in an experiment? There is not a single shred of evidence, direct or indirect, that a wavefunction physically exists or wavefunction collapse occurs.

DrChinese said:
2. And there is substantial evidence - in the form of random results to any desired level - that there is NO connection between quantum states of many collections of particles that are in extremely close causal contact. An example being a radioactive sample of uranium, or polarization of photons emitted from ordinary light bulbs. It is amazing that ONLY the entangled photons from a PDC source (or similar) display these fascinating correlations, while all other particles are purely random. Yet ueit says all share a "common" knowledge of the evolution of the universe.

This is a fallacious argument. Just because something is unpredictable does not make it non-deterministic. This is true even in the classical example of a ball at the top of a hill.
 
  • #223
NateTG said:
Just to be clear, this is an attempt to illustrate an 'unstated assumption' of Bell's Theorem and not an attempt to refute it, QM, or any experimental results.

Although the presence of wormholes can cause some problems with causality, even Einstein knew that they were consistent with GR (and are thus local). AFAICT it's a bit ambiguous whether this qualifies as Bell-local, because people generally assume that pairs of entangled particles can be space-like separated.
[emphasis added]

Sorry, I’ve totally missed any 'unstated assumption' by Bell that should be included in his theorem. Do you have a clear definition of what you’re referring to here?

Also I don’t see what you think is “a bit ambiguous”: are you trying to say it could be fair to call a pair of wormholes Bell-local? If you understand Bell-local there is nothing ambiguous at all; the idea is just not Bell-local, any more than QM is Bell-local!

There is nothing wrong with being non-Bell-local; that is how QM defines itself as a “complete” theory (one that no other theory can improve on), and a Local Realistic theory cannot be complete. Remember, Bell-local means BOTH Einstein local and Einstein realistic, i.e. Einstein local and realistic. That means local variables created as part of the two separate photons when they were created, including hidden variables. But that is not all; these variables do not change with any “Weird Action at a Distance” (WAAD) of any kind! Any such requirement is NOT Bell-local.

All the Bell experiments have said is “Sorry Bell, we still see weird action at a distance that cannot be accounted for by any Local Realistic Variable, not even a hidden but unknown one.”

Wormhole theories and "common histories able to manipulate and correlate outcomes" theories can solve Bell because they are non-Bell-local.

Just like MWI and Bohmian Mechanics, they are not Bell-local; they all allow WAAD to appear, they just have different ways to account for it - but it is still WAAD that a Local Realist cannot understand without extra dimensions or invisible guide waves. They provide predictions equivalent to QM because they are using non-Bell-local solutions.

If they were Bell-local theories they could provide a definition of Einstein’s unknown hidden variable, not a solution to WAAD equivalent to QM's non-local solution.

A theory cannot just change the rules for what Bell-local means and expect to gain more respect than QM; if it wants to claim to be local, it must define and describe the variable that replaces the uncertainty principle.
 
  • #224
NateTG said:
Among other reasons, people find strong determinism unpalatable because it is not useful for producing testable theories. Of course, the same is true for MWI, which people seem to have much less trouble with.

Strong determinism is nothing but plain old determinism followed to its logical conclusion. If one doesn't like the conclusion, too bad for him. However, it should be clearly stated that the reason for rejecting such theories has nothing to do with Bell.
 
  • #225
DrChinese, please answer my two questions:

Please specify what evidence supports the idea of a non-realistic universe.
Please specify what evidence supports the idea of a non-local universe.

Thanks!
 
  • #226
NateTG said:
...There is not a single shred of evidence, direct or indirect, that a wavefunction physically exists...

Have I missed something here? Aren't there solutions of the Schrödinger equation that accurately predict the orbitals of the hydrogen atom, and the electron density of H2? Didn't solutions of the SE allow development of the scanning tunnelling microscope? Aren't de Broglie waves wavefunctions? Don't they accurately predict the various interference patterns of electrons? I think the evidence is almost overwhelming.

Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that. :biggrin: If it walks like a duck and quacks like a duck...
 
  • #227
ueit said:
João Magueijo’s article “Plan B for the cosmos” (Scientific American, Jan. 2001, p. 47) reads:
Inflationary theory postulates that the early universe expanded so fast that the range of light was phenomenally large. Seemingly disjointed regions could thus have communicated with one another and reached a common temperature and density. When the inflationary expansion ended, these regions began to fall out of touch.

It does not take much thought to realize that the same thing could have been achieved if light simply had traveled faster in the early universe than it does today. Fast light could have stitched together a patchwork of otherwise disconnected regions. These regions could have homogenized themselves. As the speed of light slowed, those regions would have fallen out of contact.
It is clear from the above quote that the early universe was in thermal equilibrium. That means that there was enough time for the EM field of each particle to reach all other particles (it only takes light one second to travel between two opposite points on a sphere with a diameter of 3 x 10^8 m, but this time is hardly enough to bring such a sphere of gas to an almost perfect thermal equilibrium). A Laplacian demon “riding on a particle” could infer the position/momentum of every other particle in that early universe by looking at the field around him. This is still true today because of the extrapolation mechanism.
Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogeneous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogeneous.

So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself.
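To put numbers on the "edges" point, a minimal 1+1-dimensional sketch (my own illustration, with c = 1):

Code:
# Past light cones of two simultaneous, spacelike-separated events
# A = (t=0, x=0) and B = (t=0, x=D), in flat spacetime with c = 1.
D = 5.0  # spatial separation between the two events

def past_cone(x_event, T):
    # Spatial extent of the event's past light cone at time -T (T > 0).
    return (x_event - T, x_event + T)

for T in (1.0, 10.0, 1000.0):
    a = past_cone(0.0, T)
    b = past_cone(D, T)
    # However far back you go, B's cone always extends D beyond A's,
    # so the two cones never encompass identical regions.
    print(T, a, b, b[1] - a[1])  # last column is always D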

The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me this would imply that at the event of the joining-up, photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR, it just suggested that if the universe had reached equilibrium this would mean that knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense.
ueit said:
I also disagree that “the singularity doesn't seem to have a state that could allow you to extrapolate later events by knowing it”. We don’t have a theory to describe the big bang, so I don’t see why we should assume that it was a non-deterministic phenomenon rather than a deterministic one. If QM is deterministic after all, I don’t see where a stochastic big bang could come from.
I wasn't saying anything about the big bang being stochastic, just about the initial singularity in GR being fairly "featureless", you can't extrapolate the later state of the universe from some sort of description of the singularity itself--this doesn't really mean GR is non-deterministic, you could just consider the singularity to not be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.
JesseM said:
I was asking if you were sure about your claim that in the situation where Mars was deflected by a passing body, the Earth would continue to feel a gravitational pull towards Mars' present position rather than its retarded position, throughout the process.
ueit said:
Yes, because this is a case where Newtonian theory applies well (small mass density). I’m not accustomed to the GR formalism, but I bet that the difference between the predictions of the two theories is very small.
Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth. If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there?
ueit said:
In Newtonian gravity the force is instantaneous. So, yes, in any system for which Newtonian gravity is a good approximation the objects are “pulled towards other object's present positions”.
You're talking as though the only reason Newtonian gravity could fail to be a good approximation is because of the retarded vs. current position issue! But there are all kinds of ways in which GR departs wildly from Newtonian gravity which have nothing to do with this issue, like the prediction that sufficiently massive objects can form black holes, or the prediction of gravitational time dilation. And the fact is that the orbit of a given planet can be approximated well by ignoring the other planets altogether (or only including Jupiter), so obviously the issue of the Earth being attracted to the current vs. retarded position of Mars is going to have little effect on our predictions.
ueit said:
The article you linked from John Baez’s site claims that uniform accelerated motion is extrapolated by GR as well.
Well, the wikipedia article says:
In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or contracting sphere) or cylindrically symmetric (like a spinning disk).
So either one is wrong or we're misunderstanding what "uniform acceleration" means...is it possible that Baez was only talking about uniform acceleration caused by gravity as opposed to other forces, and that gravity only causes uniform acceleration in an orbit situation which also has spherical/cylindrical symmetry? I don't know the answer, this might be another question to ask on the relativity forum...in any case, I'm pretty sure that the situation you envisioned where Mars is deflected from its orbit by a passing body would not qualify as either "uniform acceleration" or "spherically/cylindrically symmetric".
ueit said:
EM extrapolates uniform motion, GR uniform accelerated motion. I’m not a mathematician so I have no idea if a mechanism able to extrapolate a generic accelerated motion should necessarily be as complex or so difficult to simulate on a computer as you imply. You are, of course, free to express an opinion but at this point I don’t think you’ve put forward a compelling argument.
You're right that I don't have a rigorous argument, but I'm just using the following intuition--if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position under the assumption it continued to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past? Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two.
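To make the contrast concrete, a toy sketch (my own illustration; the spring chain stands in for any set of mutually interacting parts):

Code:
# Extrapolating uniform motion or uniform acceleration is a closed-form
# one-liner; extrapolating mutually interacting bodies has no shortcut.

def extrapolate_const_velocity(x, v, t):
    return x + v * t                    # one multiply-add

def extrapolate_uniform_accel(x, v, a, t):
    return x + v * t + 0.5 * a * t * t  # still closed form

def extrapolate_interacting(xs, vs, accel_fn, t, dt=1e-3):
    # Each body's acceleration depends on all the others, so the whole
    # coupled system must be integrated step by step (Euler method here).
    for _ in range(int(t / dt)):
        accs = accel_fn(xs)
        vs = [v + a * dt for v, a in zip(vs, accs)]
        xs = [x + v * dt for x, v in zip(xs, vs)]
    return xs

# Toy interacting system: three unit masses coupled by unit springs.
def spring_accels(xs):
    return [xs[1] - xs[0],
            (xs[0] - xs[1]) + (xs[2] - xs[1]),
            xs[1] - xs[2]]

print(extrapolate_const_velocity(0.0, 1.0, 2.0))                          # 2.0
print(extrapolate_interacting([0.0, 1.1, 2.0], [0.0] * 3, spring_accels, 2.0))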
ueit said:
If what you are saying is true then we should expect Newtonian gravity to miserably fail when dealing with a non-uniform accelerated motion, like a planet in an elliptical orbit, right?
No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue.
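(For scale, that 22-minute figure is just the light-travel time at maximum separation: the Earth-Mars distance peaks around 4.0 x 10^11 m, and 4.0 x 10^11 m / 3.0 x 10^8 m/s is about 1.3 x 10^3 s, roughly 22 minutes.)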
 
Last edited:
  • #228
NateTG said:
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe.
But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions. As I argued in my last post, ueit's argument about thermal equilibrium in the early universe establishing that all past light cones merge and become identical at some point doesn't make sense.
 
  • #229
NateTG said:
As far as I can tell, ueit is basically arguing an interpretation of arbitrary strong determinism, which is then made local by assuming that each particle consults its own model of the entire universe. In effect, each particle carries some hidden state \vec{h} which corresponds to a complete list of the results of 'wavefunction collapses'.

It's not exactly what I propose. Take the case of gravity in a Newtonian framework. Each object "knows" where all other objects are, instantaneously. It then acts as if it's doing all the calculations, applying the inverse square law. General relativity explains this apparently non-local behavior through a local mechanism where the instantaneous position of each body in the system is extrapolated from the past state. That past state is "inferred" from the space curvature around the object.
By analogy, we might think that the EPR source "infers" the past state of the detectors from the EM field around it, extrapolates the future detector orientation and generates a pair of entangled particles with a suitable spin.
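For the EM half of the analogy this is a standard textbook result: computed from the retarded (Liénard-Wiechert) data, the field of a charge in uniform motion points along the line from the charge's present position,

E(r,t) = \frac{q}{4\pi\epsilon_0}\,\frac{1-\beta^2}{(1-\beta^2\sin^2\theta)^{3/2}}\,\frac{\hat{R}}{R^2}

where R is taken from the instantaneous position, \theta is the angle between R and the velocity, and \beta = v/c. The moment the charge accelerates, this "extrapolation" fails and the correction propagates outward at c.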
 
  • #230
NateTG said:
If there is a wavefunction and it collapses, why have we never noticed wavefunction collapse in an experiment? There is not a single shred of evidence, direct or indirect, that a wavefunction physically exists or wavefunction collapse occurs.

This is a fallacious argument. Just because something is unpredictable does not make it non-deterministic. This is true even in the classical example of a ball at the top of a hill.

Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't, we see randomness everywhere else. So the ONLY time we see these is with entangled particles. Hmmm. Gee, is this a strained explanation or what? And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that?

It is sort of like invoking the phases of the moon to explain why there are more murders during a full moon, and not being willing to accept that there are no fewer murders at other times. Or do we use this as an explanation only when it suits us?

If this isn't ad hoc science, what is?
 
  • #231
DrChinese said:
Not true. If entangled particles are spin correlated due to a prior state of the system, why aren't ALL particles similarly correlated? We should see correlations of spin observables everywhere we look! But we don't, we see randomness everywhere else. So the ONLY time we see these is with entangled particles. Hmmm. Gee, is this a strained explanation or what?

That's simple. Even if each particle "looks" for a suitable detector orientation before emission, only for entangled particles do we have a set of supplementary conditions (conservation of angular momentum, same emission time) that enable us to observe the correlations. In order to release a pair of entangled particles both detectors must be in a suitable state; that's not the case for a "normal" particle.

And gosh, the actual experimental correlation just happens to match QM, while there is absolutely no reason (with this hypothesis) it couldn't have ANY value (how about sin^2 theta, for example). Why is that?

I put Malus' law in by hand, without any particular reason other than to reproduce QM's prediction. I'm getting tired of pointing out that the burden of proof is on you. You make the strong claim that no local-realistic mechanism can reproduce QM's prediction. On the other hand, I don't claim that my hypothesis is true or even likely. I only claim that it is possible.

To give you an example, von Neumann's proof against the existence of hidden-variable theories is wrong even if no such theory is provided. It was wrong even before Bohm published his interpretation and will remain wrong even if BM is falsified. So, asking me to provide evidence for the local-realistic mechanism I propose is a red herring.

If this isn't ad hoc science, what is?

It certainly is ad-hoc but so what? Your bold claim regarding Bell's theorem is still proven false.
 
  • #232
ueit said:
1. In order to release a pair of entangled particles both detectors must be in a suitable state; that's not the case for a "normal" particle.

2. I put Malus' law in by hand, without any particular reason other than to reproduce QM's prediction.

3. Your bold claim regarding Bell's theorem is still proven false.

1. :smile:

2. :smile:

3. Still agreed to by virtually every scientist in the field.
 
  • #233
The question of what is or is not a valid loophole in Bell's theorem should not be a matter of opinion, and it also should not be affected by how ridiculous or implausible a theory based on the loophole would have to be. For example, everyone agrees the "conspiracy in initial conditions" is a logically valid loophole, even though virtually everyone also agrees that it's not worth taking seriously as a real possibility. If it wasn't for the light cone objection, I'd say that ueit had pointed out another valid loophole, even though I personally wouldn't take it seriously because of the separate objection of the need for ridiculously complicated laws of physics to "extrapolate" the future states of nonlinear systems with a huge number of interacting parts like the human brain. But I do think the light cone objection shows that ueit's idea doesn't work even as a logical possibility. If he wanted to argue that each particle has, in effect, not just a record of everything in its past light cone, but a record of the state of the entire universe immediately after the Big Bang (or at the singularity, if you imagine the singularity itself has 'hidden variables' which determine future states of the universe), then this would be a logically valid loophole, although I would see it as just a version of the "conspiracy" loophole (since each particle's 'record' of the entire universe's past state can't really be explained dynamically, it would seem to be part of the initial conditions).
 
  • #234
JesseM said:
Your logic here is faulty--even if the observable universe had reached thermal equilibrium, that definitely doesn't mean that each particle's past light cone would become identical at some early time. This is easier to see if we consider a situation of a local region reaching equilibrium in SR. Suppose at some time t0 we fill a box many light-years long with an inhomogeneous distribution of gas, and immediately seal the box. We pick a particular region which is small compared to the entire box--say, a region 1 light-second wide--and wait just long enough for this region to get very close to thermal equilibrium. The box is much larger than the region so this will not have been long enough for the whole thing to reach equilibrium, so perhaps there will be large-scale gradients in density/pressure/temperature etc., even if any given region 1 light-second wide is very close to homogeneous.

So, does this mean that if we take two spacelike-separated events inside the region which happen after it has reached equilibrium, we can predict one by knowing the complete light cone of the other? Of course not--this scenario is based entirely on the flat spacetime of SR, so it's easy to see that for any spacelike-separated events in SR, there must be events in the past light cone of one which lie outside the past light cone of the other, no matter how far back in time you go. In fact, as measured in the inertial frame where the events are simultaneous, the distance between the two events must be identical to the distance between the edges of the two past light cones at all earlier times. Also, if we've left enough time for the 1 light-second region to reach equilibrium, this will probably be a lot longer than 1 second, meaning the size of each event's past light cone at t0 will be much larger than the 1 light-second region itself.

The situation is a little more complicated in GR due to curved spacetime distorting the light cones (look at some of the diagrams on Ned Wright's Cosmology Tutorial, for example), but I'm confident you wouldn't see two light cones smoothly join up and encompass identical regions at earlier times--it seems to me that, at the event of the joining-up, photons at the same position and moving in the same direction would have more than one possible geodesic path (leading either to the first event or the second event), which isn't supposed to be possible. In any case, your argument didn't depend specifically on any features of GR; it just suggested that if the universe had reached equilibrium, then knowing the past light cone of one event in the region would allow a Laplacian demon to predict the outcome of another spacelike-separated event, but my SR example shows this doesn't make sense.

OK, I think I understand your point. The CMB isotropy does not require the whole early universe to be in thermal equilibrium. But, does the evidence we have require the opposite, that the whole universe was not in equilibrium? If not, my hypothesis is still consistent with extant data.

I wasn't saying anything about the big bang being stochastic, just about the initial singularity in GR being fairly "featureless": you can't extrapolate the later state of the universe from some sort of description of the singularity itself. This doesn't really mean GR is non-deterministic--you could just consider the singularity not to be a part of the spacetime manifold, but more like a point-sized "hole" in it. Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.

I don't think your argument applies in this case. For example, the pre-big-bang universe might have been a Planck-sized "molecule" of an exotic type that produced all particles in a deterministic manner.

Probably, but that doesn't imply that GR predicts that the Earth will be attracted to Mars' current position, since after all one can ignore Mars altogether in Newtonian gravity and still get a very good prediction of the movement of the Earth.

Forget about that example. Take Pluto's orbit, or the Earth-Moon-Sun system. In both cases the acceleration felt by each object is non-uniform (the distance between Pluto and the Sun ranges from about 4.4 to 7.4 billion km, and during a solar eclipse the force acting on the Moon differs significantly from that during a lunar eclipse). However, both systems are well described by Newtonian gravity, hence the retardation effect is almost nil. I think the main reason is that, quantitatively, the gravitational radiation is extremely small. The Wikipedia article you've linked says that Earth loses only about 300 joules as gravitational radiation, out of a total of 2.7 x 10^33 joules.

If you really think it's plausible that GR predicts Earth can "extrapolate" the motions of Mars in this situation, which obviously departs significantly from spherical/cylindrical symmetry, perhaps we should start a thread on the relativity forum to get confirmation from GR experts over there?

I'll do that.

You're right that I don't have a rigorous argument, but I'm just using the following intuition: if you know the current position of an object moving at constant velocity, how much calculation would it take to predict its future position, under the assumption that it continues to move at this velocity? How much calculation would it take to predict the future position of an object which we assume is undergoing uniform acceleration? And given a system involving many components with constantly-changing accelerations due to constant interactions with each other, like water molecules in a jar or molecules in a brain, how much calculation would it take to predict the future position of one of these parts given knowledge of the system's state in the past? Obviously the amount of calculation needed in the third situation is many orders of magnitude greater than in the first two.
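As a rough illustration of that intuition (a minimal sketch; the function names and numbers are made up for illustration, not anything from the thread's physics): the first two cases have closed-form answers costing a few arithmetic operations, while the interacting many-body case forces step-by-step numerical integration whose cost grows with both the number of bodies and the length of the prediction.

```python
import numpy as np

def predict_constant_velocity(x0, v, t):
    # Closed form: one multiply-add, no matter how far ahead t is.
    return x0 + v * t

def predict_uniform_acceleration(x0, v0, a, t):
    # Still closed form: a handful of arithmetic operations.
    return x0 + v0 * t + 0.5 * a * t**2

def predict_interacting_system(pos, vel, masses, t, dt=1e-3, G=1.0):
    """No closed form in general: integrate step by step
    (velocity-Verlet). Cost ~ O(n^2 * t/dt) for n bodies."""
    pos, vel = pos.astype(float).copy(), vel.astype(float).copy()

    def accel(p):
        # Pairwise inverse-square attractions between all bodies.
        a = np.zeros_like(p)
        for i in range(len(p)):
            for j in range(len(p)):
                if i != j:
                    r = p[j] - p[i]
                    a[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
        return a

    a = accel(pos)
    for _ in range(int(t / dt)):
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos)
        vel += 0.5 * dt * a
    return pos
```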

If my analogy with gravity stands (all kinds of motions are well extrapolated in the small mass density regime), the difference in complexity should be about the same as between the Newtonian inverse square law and GR.

No. If our predictions don't "miserably fail" when we ignore Mars altogether, they aren't going to miserably fail if we predict the Earth is attracted to Mars' current position as opposed to where GR says it should be attracted to, which is not going to be very different anyway since a signal from Mars moving at the speed of light takes a maximum of 22 minutes to reach Earth according to this page. Again, in the situations where GR and Newtonian gravity give very different predictions, this is not mainly because of the retarded vs. current position issue.
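For what it's worth, the quoted travel time is easy to check, taking the maximum Earth-Mars separation to be roughly 400 million km (an often-quoted approximate figure, assumed here rather than taken from the linked page):

```python
c = 3.0e8        # speed of light, m/s
d_max = 4.0e11   # ~400 million km, approximate maximum Earth-Mars distance, m
print(d_max / c / 60)  # ~22 minutes
```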

See my other examples above.
 
  • #235
JesseM,

I've started a new thread on the "Special & General Relativity" forum named "General Relativity vs Newtonian Mechanics".
 
  • #236
paw said:
Have I missed something here? Aren't there solutions of the Schroedinger equation that accurately predict the orbitals of the hydrogen atom and the electron density of H2? Didn't solutions of the SE allow development of the scanning tunnelling microscope? Aren't DeBroglie waves wavefunctions? Don't they accurately predict the various interference patterns of electrons? I think the evidence is almost overwhelming.

Of course you could argue that all these phenomena are caused by some property that behaves exactly like the wavefunction, but a gentleman by the name of Occam had something to say about that. :biggrin: If it walks like a duck and quacks like a duck...

Applying Occam's Razor to QM produces an 'instrumentalist interpretation', which is explicitly uninterested in anything untestable and instead simply predicts probabilities of experimental results. In other words, as long as there are predictively equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one.
 
  • #237
JesseM said:
But if it's really local, each particle should only be allowed to consult its own model of everything in its past light cone back to the Big Bang--for the particle to have a model of anything outside that would require either nonlocality or a "conspiracy" in initial conditions.

In a sense it's a 'small conspiracy', since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that ensures correlation.
 
  • #238
NateTG said:
In a sense it's a 'small conspiracy', since something like Bohmian Mechanics allows a particle to maintain correlation without any special restriction on the initial conditions that ensures correlation.
Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory.
 
  • #239
JesseM said:
Yes, but Bohmian mechanics is explicitly nonlocal, so it doesn't need special initial conditions for each particle to be "aware" of what every other particle is doing instantly. ueit is trying to propose a purely local theory.

BM is explicitly non-local because it requires synchronization. The 'instantaneous communication' aspect can be handled by anticipation since BM is deterministic.

Now, let's suppose that we can assign a synchronization value to each space-time event in the universe, with the properties that:
(1) the synchronization value is a continuous function on space-time;
(2) if event a is in the history of event b, then the synchronization value of a is less than the synchronization value of b.

Now, we should be able to apply BM to 'sheets' of space-time with a fixed synchronization value rather than to instants. Moreover, for flat regions of space-time these sheets should correspond to 'instants', so the predictions should align with experimental results.

Of course, it's not necessarily possible to have a suitable synchronization value. For example, in time-travel scenarios it's clearly impossible because of condition (2), but EPR isn't really a paradox in those scenarios either.
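A discrete toy version of those two conditions (a minimal sketch over a handful of hypothetical events; condition (1), continuity, has no analogue in a finite model): give each event the length of the longest causal chain ending at it, and the constant-value 'sheets' fall out automatically.

```python
from collections import defaultdict
from functools import lru_cache

# Toy causal structure: event -> events in its immediate past (made-up data).
past = {
    'A': set(), 'B': set(),
    'C': {'A'}, 'D': {'A', 'B'},
    'E': {'C', 'D'},
}

@lru_cache(maxsize=None)
def sync(event):
    # Length of the longest causal chain ending at this event.
    # If a lies anywhere in the history of b, sync(a) < sync(b): condition (2).
    return 0 if not past[event] else 1 + max(sync(p) for p in past[event])

# 'Sheets' of constant synchronization value -- the analogue of instants:
sheets = defaultdict(list)
for e in past:
    sheets[sync(e)].append(e)
print(dict(sheets))  # {0: ['A', 'B'], 1: ['C', 'D'], 2: ['E']}
```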
 
  • #240
JesseM,

It seems I was wrong about GR being capable of extrapolating generic accelerations. I thought that the very small energy lost to radiation would have no detectable effect on planetary orbits. This is true, but there are other effects, like Mercury's perihelion advance.

I'm still interested in your opinion regarding the big bang, though.

You said:

Of course GR's prediction of a "singularity" may be wrong, but in that case the past light cones of different events wouldn't converge on a single point of zero volume in the same way, so as long as we assume the new theory still has a light cone structure, we're back to my old argument about the past light cones of spacelike-separated events never becoming identical.

We know that GR alone cannot describe the big bang, as it doesn't provide a mechanism by which the different particles are created. So, if the pre-big-bang universe was not a zero-sized object but something with a structure of some sort, and if it existed long enough in that state for a light signal to travel across the whole thing, would this ensure identical past light cones?
 
  • #241
NateTG said:
Applying Occam's Razor to QM produces an 'instrumentalist interpretation' which is explicitly uninterested in anything untestable, and, instead simply predicts probabilities of experimental results. In other words, as long as there are prediction equivalent theories without a physically real wavefunction, Occam's razor tells us there isn't necessarily one.
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.
 
  • #242
Ontoplankton said:
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.

Well, ultimately, it comes down to what 'simplest' means. And that requires some sort of arbitrary notion.
 
  • #243
I don't know... it's easy to specify a wavefunction; you just write down some equations, and then without any complex further assumptions, you can talk about decoherence and so on to show that humans and their experiments are structures in the wavefunction. But how do you specify the collection of humans and their experiments, without deriving it from something more basic? I think any theory that's anthropocentric like that is bound to violate Occam.
 
  • #244
We have one theory which says that:
1. We can predict experimental results using some method X.
2. There are unobservable things used in X.
3. These unobservable things have physical reality.
And another theory which says that:
1. We can predict experimental results using the same method X.
2. There are unobservable things used in X.

Even granting that 'physical reality' is a poorly defined notion, it seems like the latter theory is simpler.
 
  • #245
The crucial difference here is that in the former theory, 1 is explained by 2 and 3, whereas in the latter theory, 1 is an assumption that comes from nowhere. Occam is bothered by complex assumptions, not complex conclusions. Once you've explained something, you can cross it off your list of baggage.

Also, the latter theory isn't complete; either the unobservable things exist or they don't, and you have to pick one.
 
  • #246
1. We can predict experimental results using QED.
2. The 4-potential A^{\mu} is unobservable.

Surely we don't have to make a choice, but rely on experiment?
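For the record, the standard sense in which A^{\mu} is unobservable: under a gauge transformation A^{\mu} \to A^{\mu} + \partial^{\mu}\chi, the field tensor F^{\mu\nu} = \partial^{\mu}A^{\nu} - \partial^{\nu}A^{\mu} is unchanged, and it is F^{\mu\nu} (i.e. the E and B fields)--or, in Aharonov-Bohm setups, loop integrals \oint A_{\mu}\,dx^{\mu}--that enters anything measurable, never A^{\mu} itself.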
 
  • #247
Ontoplankton said:
I disagree. A wavefunction is a much simpler thing than the collection of all humans and their experiments. Occam would tell you to derive the latter from the former (as in MWI) rather than somehow taking it as given.
Actually, it is quite possible that you can do without a wavefunction (I guess Occam would be happy) :smile: In http://www.arxiv.org/abs/quant-ph/0509044, the Klein-Gordon-Maxwell electrodynamics is discussed; the unitary gauge is chosen (meaning the wavefunction is real), and it is proven that one can eliminate the wavefunction from the equations and formulate the Cauchy problem for the 4-potential of the electromagnetic field. That means that if you know the 4-potential and its time derivatives at some moment in time, you can calculate them for any moment in time; in other words, the 4-potential evolves independently.
 