Relativity & Quantum Theory: Is Locality Violated?

  • Thread starter UglyDuckling
  • Start date
  • Tags
    Relativity
In summary, Special Relativity is not violated, because no information is transferred between two systems that are spatially separated.
  • #176
Hurkyl said:
Pointless semantic distractions?? Does that mean you no longer care to assert that what I put forth as an alternative to "locality" is actually Bell-locality?

I don't know what you're referring to. What did you put forth as an alternative to "locality"? And I think I'm just confused about what you're asking here: I'm the one who thinks Bell Locality is a perfectly good definition of locality; so if your proposed alternative "is actually Bell Locality" wouldn't that make it not really an alternative at all?



The random numbers that are "generated" are the manifestation -- they are not any sort of dynamical entities, and they do not have any sort of effect on anything. They are nothing more than the result when you insist that a stochastic theory produce an actual outcome.

(A stochastic theory, of course, doesn't like to produce outcomes... it prefers to simply stick with a probability distribution on the outcome space)

The outcomes appear in some physical form -- like, in your example, the positions of a bunch of electrons that make a phosphor screen light up a big glowing green "H" or something. Perhaps this just takes us back to our earlier argument about what constitutes a "theory". I'm taking it for granted that there exist physical things like video screens and electrons, and asking about theories which might explain the underlying dynamics of whatever is at the next-level-down. You (still) seem to think it's ok to assert as a theory some mathematical/probabilistic statement like "P(HH)=.5, P(TT)=.5, P(HT)=P(TH)=0". That may be a correct description of the observed outcomes, but (the way I am thinking about this, as a physicist) it is *not* a *theory*.

If we accept as a given that the observed result is (say) produced by electrons landing "here" instead of "there" on the screen, then your proposed stochastic theoretical explanation of the observations better include some way for the random numbers to affect electrons. If they "do not have any sort of effect on anything" then you are just spinning your wheels, failing in principle to propose the kind of thing that could ever possibly address the issue at hand.

Really, this comes down to the old objection that one could just take the quantum mechanical formalism as a blind algorithm, which makes no claims about any beables... and hence make correct predictions without ever asserting anything that could possibly be construed as violating local causality. Of *course* one can do this. One can avoid making nonlocal claims by refusing to claim anything about anything. Duh. But we *know* that big macroscopic things exist, and we *know* they're made of littler things. The question is: is it possible that the dynamics of the little stuff (or the sub-little stuff, or whatever) respects local causality? Bell gave a theorem that the answer is no: no locally causal (Bell Local) theory can account for what's observed.

Putting tape over your mouth and refusing to assert a theory does not constitute a counterexample to this theorem.
 
  • #177
You've got to be kidding? First off, the unitary evolution is deterministic. Second, it *doesn't* "explain the correlation just fine" since it predicts that Alice's box never ever reads definitely "H" or definitely "T" -- in direct contradiction with what Alice (by assumption in *your* example) sees.
Unitary evolution provides us with a state:

(|HH> + |TT>) / sqrt(2)

from which it's easy to derive the correlation. Furthermore, if we actually conduct an experiment to test if there's a correlation, unitary evolution provides us with the resulting state

(|HH> + |TT>)|correlated> / sqrt(2)

As opposed to the state we'd get when there wasn't any entanglement at all:

((|HH> + |TT>)|correlated> + (|HT> + |TH>)|uncorrelated>) / 2
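For concreteness, the perfect correlation can be read off numerically from the Born rule (a minimal sketch; the basis ordering |HH>, |HT>, |TH>, |TT> is my choice, not part of the discussion above):

```python
from math import isclose, sqrt

# Amplitudes of (|HH> + |TT>) / sqrt(2) in the basis |HH>, |HT>, |TH>, |TT>.
amps = [1 / sqrt(2), 0.0, 0.0, 1 / sqrt(2)]

# Born rule: outcome probabilities are the squared amplitudes.
p_HH, p_HT, p_TH, p_TT = (a * a for a in amps)

assert isclose(p_HH, 0.5) and isclose(p_TT, 0.5)
assert p_HT == 0.0 and p_TH == 0.0
# The two results always agree: P(same outcome) = 1.
assert isclose(p_HH + p_TT, 1.0)
```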



ttn said:
Are you still talking about unitary-only QM?
I'm talking about any sort of statistical theory.


Before I respond to the next part, allow me to remind you of your post #133 that launched this particular arc:
ttn said:
Maybe it would be useful to ask: could anyone think of a Lorentz invariant candidate toy theory that would predict the "both H or both T" example above?
...
Do we allow (as consistent with relativity) that irreducibly-random events at spacelike separations should nevertheless demonstrate persistent correlations?
...
Is such a thing consistent with relativity? I say "no" and am thus not at all bothered (but rather relieved) that such a scenario violates Bell's local causality requirement.
...
could anyone think of a Lorentz invariant candidate toy theory that would predict the "both H or both T" example above?

Hurkyl said:
We don't need to consider space-like separated events to talk about locality. One nice and practical definition of locality is: "Are all the beables here sufficient to describe what's going to happen?"
ttn said:
But this is precisely the condition Bell Locality! That condition can be stated: ... throwing some additional information about spacelike separated regions into the mix doesn't *change* the probabilities.
Hurkyl said:
No! That part in red is what I did not say.

All of the beables here are sufficient to fully describe what happens here.
ttn said:
The whole point of this condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a local way? For your example of the irreducibly-random theory which purports to explain the HH/TT correlation, Bell Locality is violated: a complete specification of beables along some spacelike surface prior to both measurement events does *not* screen off the correlation.
Yes -- it's Bell Locality that is violated: in particular, its statistical independence hypothesis. But other forms of locality, such as what I stated here, are not violated.

First off, notice that your first question is very circular. Filling in the implicit stuff (as I understand it), you say:

"The whole point of the Bell locality condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a Bell local way?"

But you did not ask for a toy theory that was Bell local: you asked for a theory that was consistent with Lorentz invariance: with special relativity. (In fact, isn't the whole point of this thread to ask the question of consistency with special relativity?)

Bell locality is, indeed, violated, because one of its underlying assumptions is that there is no statistical dependence. By looking at all of the responses through the filter of Bell locality, you are, in fact, asking:

"Is there any theory consistent with special relativity that is capable of predicting statistical dependence, under the condition that there is no statistical dependence?"


ttn said:
I'm sorry, but every time you start analyzing probabilities and such, you turn into a mathematician -- i.e., you completely forget about the physical situation that we're talking about here.
I am a mathematician, incidentally.

ttn said:
What you have now lapsed into calling the "statistical independence hypothesis" is the *physical* requirement
You try to make it sound important by calling it a "physical requirement" -- but that amounts to nothing more than saying that it's an axiom that you wish to require your mathematical models of the physical universe to satisfy.

ttn said:
Yes, one can *deduce* from this "statistical independence" -- a complete specification of beables in the past of the two events should screen off any correlations between the outcomes.
Try me.

ttn said:
Let me ask you a serious question: have you ever read Bell's papers on this stuff?
I've read a few papers, including stuff you have linked in the past. I never bothered to pay attention to who the author is.
 
  • #178
ttn said:
If we accept as a given that the observed result is (say) produced by electrons landing "here" instead of "there" on the screen, then your proposed stochastic theoretical explanation of the observations better include some way for the random numbers to affect electrons.
Of course the random variables have an effect on electrons.

But this has absolutely nothing to do with the idea of a random number generator you seem to be using in post #171.


You seem to have in your mind that a stochastic universe would be analogous to how a computer program will use a pseudorandom number generator to spit out a sequence of numbers, and then use those numbers to control how things dance across its screen.

But that's not how statistics works! A random variable is nothing more than a measure on a space of outcomes. In fact, it is a very difficult problem to try and give any sort of precise meaning to the word "random number generator".


(In a classical theory)
E.g., one random variable could be on the space of possible positions and momenta of an electron. Another could be on the configurations of the electromagnetic field. The dynamics of the theory would allow us to compute a new random variable on the space of possible positions, momenta, and accelerations of the electron.

(Of course, this is just marginalized from the joint distribution over the electron position, momentum and acceleration and the electromagnetic field configuration)
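That marginalization is easy to illustrate with a toy discretization (all state labels and weights below are invented purely for illustration):

```python
import itertools

# A crude discretized version of the classical picture above: a joint
# distribution over (electron state, field configuration), from which
# the distribution over electron states alone is obtained by
# marginalizing out the field.
electron_states = ["x0p0", "x0p1", "x1p0", "x1p1"]
field_configs = ["E_low", "E_high"]

# Hypothetical joint probabilities (chosen so they sum to 1).
joint = {pair: w for pair, w in zip(
    itertools.product(electron_states, field_configs),
    [0.10, 0.15, 0.05, 0.20, 0.20, 0.05, 0.15, 0.10])}

# Marginal distribution over electron states alone.
marginal = {e: sum(joint[(e, f)] for f in field_configs)
            for e in electron_states}

assert abs(sum(marginal.values()) - 1.0) < 1e-12
```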
 
  • #179
Hurkyl said:
Unitary evolution provides us with a state:

(|HH> + |TT>) / sqrt(2)

from which it's easy to derive the correlation.

"easy to derive" is irrelevant. The state is already *empirically wrong*. What exists is not a superposition of HH and TT; what exists is *either* HH *or* TT.

You seem to be using/assuming the MWI without being willing to admit the well-known weirdness of such a view. Sure, unitary evolution can get you that superposition, and you can twist and turn and eventually connect this up with what we do experience (by denying that what we see in front of our face is the truth, i.e., by postulating that we're all deluded about what the outcomes of the experiments actually were). But normally when a physicist asks if some theory or other "explains the data" he or she is not looking for a metaphysical-conspiracy-theory about how, really, the data we got directly from looking at an apparatus is a delusion.



Yes -- it's Bell Locality that is violated: in particular, its statistical independence hypothesis. But other forms of locality, such as what I stated here, are not violated.

I'm sorry, I don't follow you. The alternative you proposed (as near as I can tell) was:

"Are all the beables here sufficient to describe what's going to happen?"

But as I pointed out, this just *is* the Bell Locality condition. Maybe we're not on the same page about what the phrase "sufficient to describe what's going to happen" means. I told you what that phrase means for Bell Locality, but I don't understand what, if anything, you're proposing as an alternative. You seemed to simply reject my proposal for the meaning of that phrase on the grounds that it presupposed the statistical independence hypothesis, but that simply is not true.



First off, notice that your first question is very circular. Filling in the implicit stuff (as I understand it), you say:

"The whole point of the Bell locality condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a Bell local way?"

But you did not ask for a toy theory that was Bell local: you asked for a theory that was consistent with Lorentz invariance: with special relativity. (In fact, isn't the whole point of this thread to ask the question of consistency with special relativity?)

I asked for a theory that was consistent with SR. You just postulated some joint probabilities without ever providing a theory.


Bell locality is, indeed, violated, because one of its underlying assumptions is that there is no statistical dependence.

I'm sorry, but saying this over and over again doesn't make it so. What you are calling "no statistical dependence" is equivalent to the factorization of the joint probability, yes? Here's what Bell says about this: "Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the *formulation* of 'local causality', but as a consequence thereof." [from La Nouvelle Cuisine, page 243 of Speakable and Unspeakable, 2nd edition] I've tried and tried to explain this, without success, so I'll just have to refer you to that paper where Bell explains very clearly what the local causality condition is, and how factorization ("statistical independence") follows as a logical consequence. Factorization is *not* simply assumed; it is a consequence of a *physical* assumption -- namely, that there be no superluminal causation.
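For reference, here is the derivation in symbols (notation mine, following the standard presentation: A, B are the outcomes, a, b the settings, and λ a complete specification of beables across the relevant spacelike slice). Factorization is not assumed; it follows from the screening-off condition by the chain rule alone:

```latex
% Local causality (screening-off): given \lambda and the local setting,
% the distant outcome and setting are informationally redundant:
%   P(A \mid a, \lambda, B, b) = P(A \mid a, \lambda), \qquad
%   P(B \mid b, \lambda, a) = P(B \mid b, \lambda).
\begin{aligned}
P(A, B \mid a, b, \lambda)
  &= P(A \mid a, b, \lambda, B)\, P(B \mid a, b, \lambda)
     && \text{(chain rule)} \\
  &= P(A \mid a, \lambda)\, P(B \mid b, \lambda)
     && \text{(local causality)}
\end{aligned}
```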



By looking at all of the responses through the filter of Bell locality, you are, in fact, asking:

"Is there any theory consistent with special relativity that is capable of predicting statistical dependence, under the condition that there is no statistical dependence?"

Obviously I don't agree. I'd say: by thinking (erroneously) that Bell Locality means nothing but statistical independence, you are missing the whole point. Incidentally, I find it interesting that you cannot apparently resist converting Bell Locality (which is a *physical* condition) into factorizability (which is a purely mathematical condition).



I am a mathematician, incidentally.

Not that I think there's anything wrong with that, but I'm not surprised.


You try to make it sound important by calling it a "physical requirement" -- but that amounts to nothing more than saying that it's an axiom that you wish to require your mathematical models of the physical universe to satisfy.

Precisely right. I take as an axiom that physical theories should respect relativistically local causation. And then it is proved that no theory consistent with that axiom can agree with the data. So I say "oops!" I guess that axiom is *false*. No locally causal theory can explain the data.



I've read a few papers, including stuff you have linked in the past. I never bothered to pay attention to who the author is.

I don't think I've ever linked to Bell's papers, because I don't know of any of them being online. Anyway, take it as a friendly recommendation. Bell is a brilliant physicist and a brilliant writer, and if you want to understand where I am coming from you would probably be better off just reading Bell in the original than listening to me. (I am far less brilliant.) Because everything I'm saying here, Bell already said, and said better. Plus, the reason I get so hot under the collar about this stuff is that, despite the incredible clarity of Bell's writings, he has been almost universally misunderstood by the "experts" on these topics. (I quoted Bell himself pointing that out yesterday.) So I find it extremely frustrating that people have such strong opinions on what Bell did or didn't prove, when they haven't even bothered to read Bell's papers. You are obviously a smart guy who has enough background knowledge to get completely clear on these issues; so I say, if you *want* to get clear on these issues, if these are issues you are *interested* in, then it would be a shame if you didn't read Bell's papers.
 
  • #180
ttn said:
Please don't tell me I've spent all this time trying to explain things to you, only to have *this* appear as your considered view. :grumpy:

Sure, you can explain EPR/Bell data with a theory in which the causes of certain events come from the future. Do you seriously think such a theory would be "locally causal"?

No, I am not actually saying this is my opinion since I drift towards oQM most of the time. I am merely pointing out one possibility. Does it really seem so weird that the future might influence the past? And, yes, I definitely consider such a theory to be local in every sense of the word. But it is not realistic. So it would be local non-realistic, and therefore consistent with Bell's Theorem.

And to counter your assertion ("no Bell local theory can agree with experiment"), I instead state that "no Bell realistic theory can agree with experiment". Bell realistic meaning: any theory in which there is a more complete specification of the system than the HUP allows. You cannot beat the HUP!

And please, do not bother with BM as a candidate. I am talking about a theory in which the HUP is beaten. EPR thought they had it, but experiment showed otherwise. If you can't beat the HUP, even in principle, then you are acknowledging that there are no hidden variables in the first place.
 
  • #181
DrChinese said:
No, I am not actually saying this is my opinion since I drift towards oQM most of the time. I am merely pointing out one possibility. Does it really seem so weird that the future might influence the past? And, yes, I definitely consider such a theory to be local in every sense of the word. But it is not realistic. So it would be local non-realistic, and therefore consistent with Bell's Theorem.

And to counter your assertion ("no Bell local theory can agree with experiment"), I instead state that "no Bell realistic theory can agree with experiment". Bell realistic meaning: any theory in which there is a more complete specification of the system than the HUP allows. You cannot beat the HUP!

And please, do not bother with BM as a candidate. I am talking about a theory in which the HUP is beaten. EPR thought they had it, but experiment showed otherwise. If you can't beat the HUP, even in principle, then you are acknowledging that there are no hidden variables in the first place.


Sigh. I count at least 6 major confusions here:

1. Reverse-temporal causation certainly is *not* "local in every sense of the word".

2. A theory with reverse-temporal causation, assuming such a thing could even be made well-defined, could be 100% "realistic".

3. A "local non-realistic" theory is not consistent with Bell's theorem anyway, if what you mean is what Bell meant: the full, two-part argument that no local theory, realistic or not, can agree with the empirical predictions of QM.

4. What you call my "assertion" is actually something that has been proved rigorously, unlike the vague and arbitrary statement you seem to want to "counter" me with.

5. The meaning of "You cannot beat the HUP" depends crucially on the meaning of "HUP". If one takes the HUP as a restriction on the simultaneous *reality* of certain variables, then you are, like Bohr, just rejecting the conclusion of EPR without demonstrating any error in the argument; if one takes the HUP as merely a restriction on simultaneous *knowledge* of certain variables, then something like BM *does* count as a "candidate", since it makes the same empirical predictions as quantum theory and yet has particles following definite trajectories.

6. No experiment ever "showed otherwise", i.e., refuted the EPR argument. With the help of Bell's theorem we now know that the kind of theory EPR lobbied for is not empirically viable; but this does *not* mean that experiment has refuted the argument they used to arrive at that belief. The argument might be valid, but the *premises* false.

This last is the most crucial. EPR believed in locality. EPR also constructed an argument for the proposition that "Locality --> Hidden Variables". Putting these together, they proposed that a local hidden variables theory should be sought to replace orthodox QM.

We now know from Bell that such a theory cannot work. Does this mean that EPR were wrong? Yes, in the sense that the kind of theory they said they thought should be sought turns out to be impossible. But does this mean that their *argument* for the statement "Locality --> Hidden Variables" was flawed? No! It means only that *either* that argument was flawed, or the *other premise* ("Locality") is false.

Nobody has ever pointed out a flaw in the EPR argument (widespread opinion to the contrary notwithstanding). Indeed, the argument has been re-formulated in rigorous terms several times recently. So that leaves no choice but to blame the empirical violation of Bell's inequalities on that first premise, "Locality".

Here it is again in slow motion:

EPR: Locality --> HV's

Bell: Locality + HV's --> X

Experiment: X is false

Conclusion: Locality is false.
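The propositional skeleton of this argument can be checked mechanically (a trivial sketch; L, H, X are my abbreviations for Locality, hidden variables, and the Bell-inequality predictions):

```python
from itertools import product

# Premises: EPR gives L -> H; Bell gives (L and H) -> X;
# experiment gives not-X. Enumerate every truth assignment that
# satisfies all three premises and confirm each one has L false.
consistent = [
    (L, H, X)
    for L, H, X in product([False, True], repeat=3)
    if ((not L) or H)             # EPR: Locality --> HV's
    and ((not (L and H)) or X)    # Bell: Locality + HV's --> X
    and (not X)                   # Experiment: X is false
]
# In every surviving assignment, Locality is false.
assert consistent and all(not L for (L, H, X) in consistent)
```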

Of course, as this thread has certainly made clear, this conclusion is only *interesting* for those who believe that the sense of "locality" needed to make the argument go through, is something that we ought to believe in the first place based on relativity theory. There are some people who deny that (for reasons that don't make any sense to me, but whatever). My point here is just that saying "experiment refuted EPR" represents, as Bell once said about the critics of Einstein, "misunderstanding [that] could hardly be more complete."
 
  • #182
ttn said:
I don't agree; this is not deterministic. There could be irreducible stochasticity in the initial assignment of a value to the "scalar field."

Well, that's still "deterministic" in my book, in that there are quantities out there which were there "from the beginning", but of which we simply DIDN'T KNOW the values. These are exactly the "hidden variables", the hidden beables, hidden because of some principle or other (or practical situation) which makes it impossible for us to know anything more about them than a probability distribution.


I see no reason to postulate the existence of any physical scalar fields. The point is too simple to deserve such fanciness: you could have a theory in which there is irreducible randomness (the production of some random number from some kind of probability distribution), but in which that number (whatever it turns out to be) is then "available" at other spacetime events to affect beables. And my point is simple: if it is only available at spacetime points in the future light cone, the theory is local; if it's available also outside the future light cone, the theory is nonlocal.

But this "being available" is exactly the notion of a beable, no? It is then part of the description of the "physics" (hidden or not). In that case, you suppose the quantity to HAVE a value, but you simply DON'T KNOW WHICH. So IF you knew the value, the theory would be deterministic, right?
I mean, you're thinking of one phenomenon or another which is "making the detector click" or not, and if only we knew its value, then we would KNOW in advance whether the detector would click. But as such, the stochasticity is still reducible. It can be "irreducible" for us if it is *in principle* unknowable, but it can still be "hidden deterministic", in that one COULD think up a constant field over spacetime with a random value, which then determines the outcomes.
And when this is potentially possible, you have Bell's condition.
But let us now take quantum theory in an extremist Copenhagen view: there are only macroscopic bodies, which follow strictly classical physics, except that in the equations of motion we have to introduce random events. In order to know the statistical distribution of these random events, we use quantum mechanics, which is, however, not supposed to describe even a microscopic world. Electrons and atoms don't exist. Just macroscopic bodies. But we "pretend" there to be microscopic objects, and wavefunctions and all that; this is nothing but a big game, which has only one purpose: to calculate the statistical distribution of the random actions on the classical dynamics of macroscopic bodies.
Well, that statistical distribution, deemed to be irreducibly stochastic (that means, "it just happens that way" and there's no underlying mechanism which causes it; all the QM formalism is just a trick to calculate it but doesn't represent anything), is what it is.
And how do we check whether it is compatible with the geometry of spacetime? Well, we calculate a set of probabilities of outcomes from the point of view of one observer. And then we do the same for another observer, who is Lorentz-boosted. And guess what? They come to the same statistical predictions. It is not possible to derive a "preferred reference frame" from these statistical predictions. THIS is the one and only condition that is necessary for this theory to be COMPATIBLE with the spacetime geometry.
Whether this is to be called "local" or not is your business. "Local" really has a strict meaning only in the case of deterministic theories, where THE outcome (not the *probability* of an outcome, because probability is an epistemological concept) at an event E is DETERMINED by what's in the past light cone of E; and even then, this locality is only needed for what "we can really influence and know in the lab".
In a deterministic theory, locality is needed to avoid the "I can change the future so as not to produce what I learned about the future" paradox.
In a purely stochastic theory, what remains of this requirement to avoid a paradox is "information locality", for the same reason.
And "Bell locality" is an EXTRA REQUIREMENT one can postulate, for one's own liking, and which comes in fact down to requiring that statistical effects in a theory are derivable from an underlying deterministic, local, theory.



I don't understand this attitude at all. Beables are beables. I'm happy to permit, under the banner of "irreducibly stochastic theories", theories in which the evolution of beables is non-deterministic.

Ok, but then you cannot require anything a priori about THESE random elements, and you seem to do so! Why can't these elements of randomness have correlations?

But as I said before, what would be the *point* of the randomness if it didn't affect the beables? It would then have no effect on *anything* because there *is* (by definition) nothing but the beables! You seem to want to parse "irreducibly stochastic theories" as something in which, in addition to the beables, there are these other "things" that "exist", except that they are "random" in the sense that they don't exist in any particular measure/degree/value/whatever. But "random" isn't the same as "indefinite".

Exactly! Irreducible randomness, to me, is something akin to "the will of the gods". This, as compared to "randomness induced by lack of knowledge".

You say that as soon as one assigns physical existence to the random quantities, the theory becomes deterministic. I could not disagree more strongly. First, if you *don't* assign physical existence to the random quantities, what the heck is the point? They then play absolutely no role in the dynamics. And second, whether you do or don't assign physical existence to the random quantities, has no bearing whatever on whether the theory is deterministic. A theory in which there is randomness which affects things, is *not deterministic*. For example: orthodox QM (with the collapse postulate) is *not* a deterministic theory, even though there is irreducible randomness (which of the eigenstates the initial state collapses to) and the "outcome" of this "random choice" manifests itself immediately in the beables (the wave function is now that eigenstate).

In orthodox QM, the wavefunction is not a beable. It's a trick to do calculations of a probability distribution. A way to quantify the will of the gods, if you want to.

I will agree with you that I have difficulties with such a view too (hence my preference for MWI, where at least there IS something underlying). And of course in the more von Neumann approach, where the wavefunction is a beable, you're perfectly right that it is non-local.

But (though it is not my view) if you simply see QM as a "trick to calculate irreducibly stochastic influences on the classical dynamics of macroscopic bodies" and hence deny the existence of a microscopic world, I think you have no clash per se with the Minkowski geometry of spacetime (in which only these macroscopic bodies live, of course, not the non-existent microworld). The calculated probabilities do not allow you to find a specific reference frame, and they also do not allow you to create a paradox (thanks to information locality).
I simply think that in such a case, "locality" has not much meaning beyond these statements.
It is only when you want to deny the irreducibly stochastic character of the probabilities calculated thanks to the formalism of quantum theory, and when you try to think of a MECHANISM (involving microscopic beables), that you run into problems. And when you consider the "projection of the wavefunction" as something physically happening, you have a bluntly non-local process, of course.
 
  • #183
ttn said:
Here it is again in slow motion:

EPR: Locality --> HV's

Bell: Locality + HV's --> X

Experiment: X is false

Conclusion: Locality is false.

*Sigh* (that's a joke, by the way)...

EPR did NOT make the above argument in their paper, they said: If QM is complete, then there cannot be simultaneous reality to non-commuting observables. And I agree with this conclusion.

What they did not know, but would have been surprised to learn, is that Aspect-like experiments would NOT yield more information on the separated particles than would be allowed by the HUP. They believed, but did not prove, that the HUP could be beaten. So far, the HUP still stands. Ultimately, that is the heart and soul of the debate.
 
  • #184
DrChinese said:
EPR did NOT make the above argument in their paper, they said: If QM is complete, then there cannot be simultaneous reality to non-commuting observables. And I agree with this conclusion.

That's not a conclusion, it's a definition/elaboration of "completeness". The actual argument was for the conclusion that there *is* simultaneous reality to non-commuting observables -- and hence that QM is *not* complete. And the argument was based crucially on locality, a fact you don't seem to appreciate at all.

I guess, since even Einstein thought that Podolsky's text buried the main point under pointless "erudition", Podolsky, and not you, should be blamed for your confusion over the point and content of the EPR argument.



What they did not know, but would have been surprised to learn, is that Aspect-like experiments would NOT yield more information on the separated particles than would be allowed by the HUP. They believed, but did not prove, that the HUP could be beaten. So far, the HUP still stands. Ultimately, that is the heart and soul of the debate.

My disagreement could not be more complete.

If you think the point of *either* EPR or Bell's Theorem (or Aspect's experiments) was to actually, in practice, "yield more information ... than would be allowed by the HUP" you have completely and totally missed the whole point of this entire debate.

Sigh. (not a joke)
 
  • #185
ttn said:
What you are calling "no statistical dependence" is equivalent to the factorization of the joint probability, yes?
Yes -- when there is no statistical dependence between A and B, we have P(A and B) = P(A) P(B) and we have P(A|B) = P(A).
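Plugging in the coin example under discussion shows exactly where that definition fails (numbers taken from the stated joint distribution, nothing else assumed):

```python
from math import isclose

# Joint distribution from the toy example:
# P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0.
joint = {("H", "H"): 0.5, ("H", "T"): 0.0,
         ("T", "H"): 0.0, ("T", "T"): 0.5}

# Marginal distributions for each side.
p_A = {x: sum(joint[(x, y)] for y in "HT") for x in "HT"}
p_B = {y: sum(joint[(x, y)] for x in "HT") for y in "HT"}

# Statistical independence would require P(A and B) = P(A) P(B);
# here P(HH) = 0.5 while P(A=H) P(B=H) = 0.25, so independence fails.
assert isclose(p_A["H"], 0.5) and isclose(p_B["H"], 0.5)
assert not isclose(joint[("H", "H")], p_A["H"] * p_B["H"])
```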

ttn said:
Obviously I don't agree. I'd say: by thinking (erroneously) that Bell Locality means nothing but statistical independence, you are missing the whole point. Incidentally, I find it interesting that you cannot apparently resist converting Bell Locality (which is a *physical* condition) into factorizability (which is a purely mathematical condition).
I don't think Bell Locality means nothing but statistical independence -- but it's that aspect of Bell Locality from which this whole debate stems. (At least the part in which I'm involved)

I'm willing to grant the other aspects of Bell Locality, just not the assumption of statistical independence...
Bell said:
Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the *formulation* of 'local causality', but as a consequence thereof.
... or whatever Bell happened to be assuming that is equivalent to assuming statistical independence.

In one of the papers you linked before, Bell Locality was formulated in terms of three postulates: parameter independence, statistical independence (I believe it was called "observation independence"), and something else I can't remember.


ttn said:
I take as an axiom that physical theories should respect relativistically local causation. And then it is proved that no theory consistent with that axiom can agree with the data. So I say "oops!" I guess that axiom is *false*. No locally causal theory can explain the data.
Wait a minute, I thought it was proved no Bell Local hidden variable theory could agree with the data. :tongue:


I'm going to rewind back to the original theory I posted, where I simply posited the existence of a pair of coins that were governed by a joint probability distribution. Upon reflection, I realize that there is a problem with this, but for reasons entirely different than what you've said. So let me make a slight modification before we continue.

Let us start with special relativity, but also add in some additional postulates:
(1) There exist objects called "magic coins", and they come in pairs.
(2) Magic coins can be in one of three states (in addition to whatever SR says): U, H, or T.
(3) Any two pairs of magic coins are otherwise identical.
(4) There is some sort of interaction called "flipping" that can be triggered in a laboratory setting that causes a magic coin in the U state to nondeterministically transition into the H or T state.
(5) Otherwise, magic coins do not change their state.
(6) The flipping interaction for each pair of magic coins is governed by the joint probability distribution P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0. (And the distribution over all pairs of coins factors into those of the individual pairs)
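Postulate (6) can be sanity-checked with a quick frequentist simulation. This is just bookkeeping, not part of the theory itself; the function name `flip_pair` is my own invention:

```python
import random

# Toy simulation of postulate (6): each "flipping" of a pair yields HH or TT
# with probability 1/2 each, never HT or TH. (flip_pair is a hypothetical
# name; this only illustrates the frequentist reading of the postulate.)
def flip_pair():
    return ("H", "H") if random.random() < 0.5 else ("T", "T")

trials = [flip_pair() for _ in range(100_000)]
frac_hh = sum(1 for pair in trials if pair == ("H", "H")) / len(trials)
mismatches = sum(1 for a, b in trials if a != b)

print(mismatches)         # 0: anti-matching outcomes never occur, by construction
print(round(frac_hh, 2))  # close to 0.5 for large samples
```

Note the perfect correlation falls out of the joint distribution alone; no mechanism connecting the two coins is modeled.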


The important things to note are:

(A) The theory is nondeterministic. There is nothing to determine whether the coin undergoes U-->T or U-->H.

(B) The probabilities are understood in the frequentist interpretation: we say the probability of an event E is p when the ratio of the number of times event E occurs over the number of (identical) experiments approaches p as the number of experiments goes to infinity.

In particular, this means probabilities represent nothing more than asymptotic behavior: it is entirely nonsensical to try to use them to describe an individual experiment.


It follows from the axioms of the theory that:

For the experiment where we take a magic coin and flip it, we have P(H)=1/2.

For an experiment where Alice and Bob take a pair of magic coins and flip them, we have

P(Bob sees H | Alice sees H) = 1.

and

P(Bob sees H | Alice sees T) = 0.


It does not follow from the axioms of the theory that it is impossible for Bob to see H and Alice to see T. (But the probability of it happening is zero)


This theory claims to be complete and local in the respect that everything that can be determined can be entirely determined with the local beables.


The important thing this is trying to convey relates to the usage of probabilities. Normally, (as far as I can tell) the usage of statistics in physics is either entirely aphysical, or it is based upon a very shaky logical foundation.

For example, the frequentist definition of probability requires examining a hypothetical infinite sequence of similar experiments. But we have problems such as:
(1) The limiting ratio may not be well defined.
(2) The sequence of events is hypothetical, and cannot be physical.
(3) We don't know how to classify experiments as similar! (Well, we do know one way, but then we'd never see probabilities other than 0% or 100%)

But, at least in the above theory, we can put the frequentist definition on a rigorous footing -- since there are no factors that affect the outcome of a coin flip, it's clear that we can consider any two coin flipping experiments as similar. (And experiments involving multiple coins are similarly easy) We don't have to worry if we can define a hypothetical sequence of experiments and if the limiting ratio will be defined, because one of the axioms of the theory is that we can do so without any worries.



ttn said:
You seem to be using/assuming the MWI without being willing to admit the well-known weirdness of such a view. Sure, unitary evolution can get you that superposition, and you can twist and turn and eventually connect this up with what we do experience (by denying that what we see in front of our face is the truth, i.e., by postulating that we're all deluded about what the outcomes of the experiments actually were).
I do prefer a view that's MWI-like, although I do not know if it leads to MWI. I don't like a nondeterministic theory -- I would prefer to make a statistical theory, and take statistics seriously.

In this interpretation, when we say something is a random variable, we do not mean that it is something that can acquire an outcome! We mean that it is something that assigns nonnegative real numbers to the possible outcomes, numbers that add up to one.

Other interpretations suffer from two very difficult philosophical problems:
(1) They are nondeterministic. They assert that there is no reason the measurement turns out the way it did... it just did. But at least we have this probability distribution that describes the results!

(2) It is mysterious why and how the frequentist definition of probability manages to describe anything!


But my interpretation solves both problems.

(1) It is a deterministic theory of random variables.
(2) Probabilities are fundamental elements of reality -- so we don't have to use the frequentist definition to talk about probabilities.

It does raise the philosophical issue about why it looks like we see outcomes, but that question does have an answer.



But, if you insist that it's too radical of an approach, we can stick with nondeterminism, and assert that quantum mechanics without the collapse postulate is capable of describing everything that can be described -- it has a unitarily evolving "state of the universe", and the only other things that can possibly be described are the probabilities involving the outcomes of measurements. But, as you remember, the classical usage of probabilities are statements about asymptotic behavior, and not statements about individual events.
 
  • #186
vanesch said:
Well, that's still "deterministic" in my book, in that there are quantities out there, which were there "from the beginning", but of which we simply DIDN'T KNOW things. These are exactly the "hidden variables", the hidden beables, hidden because of some or other principle (or practical situation) which makes it impossible for us to know anything more about it than a probability distribution.

I think we're talking past one another. I wasn't saying that the quantities out there had values "from the beginning." I think my use of the word "initial" was confusing. I meant only: from the moment at which the stochastic part of the dynamics does its thing. Before that moment, this value didn't exist. At that moment, it was randomly produced (and then "exists" in the same way that anything else real exists).




But this "being available" is exactly the notion of a beable, no ?

Yes, sure. I would assume that the random numbers that an irreducibly stochastic dynamics generates, are beables (or are somehow or other "encoded" in the beables). I mean, seriously, the beables are (by definition) all that exists. This is what I was saying before: if your stochastic dynamics doesn't (randomly) affect the beables, then it doesn't affect *anything* and you might as well just get rid of that part of the dynamics.


It then is a part of the description of the "physics" (hidden or not). In that, you suppose the quantity to HAVE a value, but you simply DON'T KNOW WHICH. So IF you would know the value, then the theory would be deterministic, right ?

We should forget about knowledge here. It's causing unnecessary confusion. Just imagine that we're god; we're omniscient about all the beables. I'm perfectly happy to accept as a logical possibility that the evolution equations for the beables (i.e., the dynamics of the theory) aren't deterministic. That means: even god, in his omniscience, can't predict in advance how certain things will go. There's just no present fact that uniquely determines the future. This is all just what we mean when we say a theory is irreducibly stochastic. Yes?


I mean, you're thinking of one or other phenomenon which is "making the detector click" or not, and if only we knew its value, then we would KNOW in advance whether the detector would click or not.

No, I'm not assuming or requiring that going in. My argument (well, Bell's... well, EPR's!) is that a *local* stochastic theory cannot explain the perfect correlation which is observed (and which QM predicts) when Alice and Bob measure along the same axis. This can be made rigorous if "local" means "Bell Local" (though I understand you for some reason think that would be circular, since Bell Locality already tacitly assumes determinism).



But let us now take quantum theory in an extremist Copenhagen view: there are only macroscopic bodies, which follow strictly classical physics, except that in the equations of motion, we have to introduce random events. In order to know the statistical distribution of these random events, we use quantum mechanics, which is however, not supposed to even describe a microscopic world. Electrons and atoms don't exist. Just macroscopic bodies. But we "pretend" there to be microscopic objects, and wavefunctions and all that, but this is nothing but a big game, which has only one purpose: calculate the statistical distribution of the random actions on the classical dynamics of macroscopic bodies.

I'm not sure such a thing is sufficiently well-defined for purposes of assessing its locality. Remember, to apply the criterion of Bell Locality, one needs some candidate for what a "complete description of beables" consists of. And then one also needs sufficient formal dynamical laws to map the evolution of beables onto experimental results somehow. And there seems to be a disconnect there -- which amounts basically to the measurement problem -- for this "extremist Copenhagen" view. For there exists something (the wave function) which is both essential to the calculation of probabilities for measurement outcomes, and specifically claimed *not* to be a beable. Oh, and this mysterious entity evolves according to dynamical laws which are manifestly not Lorentz invariant.

As I see it, there are simply two possibilities: the wave function is, or is not, a beable. If it is, then it's quite clear that the theory is nonlocal (but in agreement with experiment). If it isn't a beable, then we are obliged to calculate probabilities for empirical outcomes based on whatever *are* the beables (which I guess would have to be something else macroscopic) in which case I think it is obvious that we cannot make the correct predictions anymore (since basically there is nothing left of the theory)... in which case the question of whether the thing is local or not is completely moot.

"Local" really has only a strict meaning in the case of deterministic theories, where THE outcome (not the *probability* of an outcome, because probability is an epistemological concept)

not in a stochastic theory, it isn't!

at an event E is DETERMINED by what's in the past lightcone of E ; and this locality is even only needed to what "we can really influence and know in the lab".

No. I don't know how better to say it. I am perfectly willing to allow a stochastic theory. Stochastic vs deterministic, and local vs nonlocal, just aren't the same issue. Maybe you're right that it is simplest to understand the meaning of "locality" for deterministic theories, where THE outcome is DETERMINED by what's in the past lightcone of E (for a local theory) but, say, is not determined by what's in the past lightcone of E (but is instead determined by stuff happening at spacelike separation) for a nonlocal theory.

But... and here is the fundamental thing we don't seem to be able to get on the same page about... isn't the *obvious* way to generalize this to say: for stochastic theories, it makes no sense to talk about THE outcome that is DETERMINED. The whole *meaning* of a stochastic theory is that many outcomes are in principle possible, and all the theory can do is state the PROBABILITIES for the VARIOUS POSSIBLE outcomes (based on the state of the beables in the past light cone). But then the definition of locality that works fine for deterministic theories, simply goes over in the obvious way: the PROBABILITIES for the VARIOUS POSSIBLE outcomes should be "fixed" (not the OUTCOMES should be fixed, but the PROBABILITIES for different possible outcomes should be fixed) by a complete specification of beables in the past light cone. That's it. It's perfectly OK for something irreducibly random to happen that brings about the particular event E -- but still, even in an irreducibly stochastic non-deterministic theory, we can ask: does the theory predict that the probabilities for these different possible happenings depends on stuff going on at spacelike separation? If so, the theory (though still genuinely, irreducibly stochastic) is nevertheless nonlocal.

It's like this. Take one of Hurkyl's U/H/T magic coin flipping boxes (but forget about pairs of them; there's just the one). And suppose you have some theory according to which:

* If the box is in the U state and the button is pushed and the price of tea in china is above one dollar, then H or T will appear with probability 50/50

but

* If the box is in the U state and the button is pushed and the price of tea in china is less than one dollar, then H or T will appear with probability 90/10

(and suppose that the event to which "the price of tea in china" refers is spacelike separated from the pushing of the button)

Now, I don't think anybody can deny that this theory is irreducibly stochastic. I am *specifically denying* that there is any hidden variable which "really determines" whether H or T appears. It's just random. Irreducibly random. Yet this *theory* says that the *probabilities* governing the randomness (which is just the kind of dynamics this theory has, since it isn't deterministic) *depend on spacelike-separated goings-on*. I say that makes it a non-local stochastic theory.

Do you disagree with that?
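For concreteness, that tea-price theory can be transcribed as a toy simulation. All the names below (`flip_box`, `freq_h`) are hypothetical; the point is only that the box stays irreducibly random while its *probabilities* track a spacelike-separated fact:

```python
import random

# Toy version of the H/T box whose outcome probabilities depend on the
# (spacelike-separated) price of tea in China. Which face appears on any
# single push is irreducibly random; only the distribution is distant-dependent.
def flip_box(tea_price_usd):
    p_heads = 0.5 if tea_price_usd > 1.0 else 0.9
    return "H" if random.random() < p_heads else "T"

def freq_h(price, n=100_000):
    return sum(flip_box(price) == "H" for _ in range(n)) / n

h_when_expensive = freq_h(1.50)  # tends toward 0.5
h_when_cheap = freq_h(0.80)      # tends toward 0.9
print(round(h_when_expensive, 1), round(h_when_cheap, 1))
```

No hidden variable picks H or T here; the nonlocality lives entirely in the probability assignment.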


Exactly ! Irreducible randomness, to me, is something akin of "the will of the gods". This, as compared to "randomness induced by lack of knowledge".

I have no problem with irreducible randomness. I swear, I really don't. I'm not trying to swindle you. I *get* what it means. I get that there's a difference between a *really* stochastic theory (like OQM) and a theory in which we speak of probabilities for purely epistemological reasons (though the theory itself is completely deterministic... as happens in classical stat mech). So, sure, "the will of the gods". Fine. My only point is: it still makes sense to ask whether the gods take into consideration space-like-separated information when they exert their arbitrary/inexplicable/random/stochastic will at some event.



It is only when you want to deny the irreducible stochastical character of the probabilities calculated thanks to the formalism of quantum theory, and when you try to think of a MECHANISM (involving microscopic beables) that you run into problems. And when you consider the "projection of the wavefunction" as something physically happening, you have a bluntly non-local process of course.

I think you are blurring two different issues. Merely positing a mechanism involving microscopic beables does not in any way settle the question of determinism. It's possible to posit a mechanism involving micro-beables which is deterministic, sure. It's also possible to posit a mechanism involving micro-beables which *isn't* deterministic, i.e., which is irreducibly stochastic.

Here's how I want to slice it up. First issue: are you positing a theory, or not? A theory is simply some postulated mechanism for something. For all the kinds of examples we care about in this thread, I assume such a thing would involve microscopic beables, but whatever. The point is, if your "thing" doesn't posit any beables (micro or otherwise) and/or any mechanism for something, then your "thing" isn't a theory. Indeed, it's not even *about* anything.

Now once you've got a theory, you can ask: are its dynamical equations deterministic, or not? This is a perfectly distinct question -- though of course you will get yourself very confused if you try to ask this question when you haven't yet got a theory.

Finally: is the theory *local*? This is also a perfectly distinct question from the above two, though, again, you'll get yourself awfully confused if you take something that isn't a theory (say, your preference for vanilla over chocolate) and start asking questions like "is it local"? Such questions of course have no answer.

Hmmmm. Having written all this, I guess I hope you'll just ignore all of it except the bit about the H/T box and the price of tea in china. I think, if we really disagree about something (and aren't just talking past one another) it will emerge clearly from consideration of that example.
 
  • #187
Hurkyl said:
I'm willing to grant the other aspects of Bell Locality, just not the assumption of statistical independence...

... or whatever Bell happened to be assuming that is equivalent to assuming statistical independence.

All right. Be sure to let me know when you figure out exactly what that was.


In one of the papers you linked before, Bell Locality was formulated in terms of three postulates: parameter independence, statistical independence (I believe it was called "observation independence"), and something else I can't remember.

Sounds like you're really on top of it after all.



Let us start with special relativity, but also add in some additional postulates:
(1) There exist objects called "magic coins", and they come in pairs.
(2) Magic coins can be in one of three states (in addition to whatever SR says): U, H, or T.
(3) Any two pairs of magic coins are otherwise identical.
(4) There is some sort of interaction called "flipping" that can be triggered in a laboratory setting that causes a magic coin in the U state to nondeterministically transition into the H or T state.
(5) Otherwise, magic coins do not change their state.
(6) The flipping interaction for each pair of magic coins is governed by the joint probability distribution P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0. (And the distribution over all pairs of coins factors into those of the individual pairs)

In regard to (6), what if only one (or neither) of the boxes has its button pushed? You seem to be assuming that they both get pushed when you say P(HH) = P(TT) = 1/2, etc. But won't you allow that whether the buttons get pushed is a free choice that (say) two separated humans get to make?


(A) The theory is nondeterministic. There is nothing to determine whether the coin undergoes U-->T or U-->H.

I don't have (and never had) any problem with that.


(B) The probabilities are understood in the frequentist interpretation: we say the probability of an event E is p when the ratio of the number of times event E occurs over the number of (identical) experiments approaches p as the number of experiments goes to infinity.

This contradicts what you said just above. If *all you mean* by the statements about probability is that about half of a large collection of results will be H (the other half T), then on what grounds do you claim the theory is not deterministic? A deterministic theory (with "hidden variables") could equally well be consistent with (B). Maybe your point is that you are just *stipulating* (A). Like I said, I have no problem with that. But then (B) really makes no sense. (A) *commits* you to a stronger meaning for the probabilities ("propensities" or whatever).


In particular, this means probabilities represent nothing more than asymptotic behavior: it is entirely nonsensical to try and use them to describe an individual experiment.

But this is *precisely* what a non-deterministic theory *does do*. It says: even at the level of a single individual experiment, there is an irreducible 50/50 chance (or whatever) for the two outcomes. Of course it can't tell you which outcome will appear in any particular case -- that's just the whole point of it being stochastic. But to say that one has genuine non-determinism *is* to say something about an individual event -- namely, it is to deny that there exist any "hidden variables" which determine the outcome of that event.



It does not follow from the axioms of the theory that it is impossible for Bob to see H and Alice to see T. (But the probability of it happening is zero)

Huh? What is the meaning of "impossible" other than "the probability of it happening is zero"?



This theory claims to be complete and local in the respect that everything that can be determined can be entirely determined with the local beables.

Do the "local beables" for Alice's experiment include (a) whether Bob chose to push his button and (b) the outcome of Bob's experiment?

Please also see the example (from my other earlier post today) about the H/T box in which the probabilities depend on the price of tea in china. That's really what's at issue here.



For example, the frequentist definition of probability requires examining a hypothetical infinite sequence of similar experiments. But we have problems such as:
(1) The limiting ratio may not be well defined.
(2) The sequence of events is hypothetical, and cannot be physical.
(3) We don't know how to classify experiments as similar! (Well, we do know one way, but then we'd never see probabilities other than 0% or 100%)

This is all completely irrelevant. It's possible to have a physics theory that is deterministic, and it's also possible to have one that includes irreducible randomness. If one is worried about it, one can prove that in the latter case the probabilities (or if you prefer, call them propensities) assigned by the theory map correctly onto frequentist probabilities. But who cares about any of that? We are all perfectly happy to just accept the possibility of stochastic theories.

The *real* question at issue here is: what does it mean for such a theory to be *local*?

Worrying about relative frequencies for lots of repeated trials is never going to address this.



I do prefer a view that's MWI-like, although I do not know if it leads to MWI. I don't like a nondeterministic theory -- I would prefer to make a statistical theory, and take statistics seriously.

I have no idea what you mean by "a statistical theory" if this is supposed to be neither nondeterministic nor deterministic. If "statistical" means deterministic, then we have been seriously talking past one another lately, which I guess is better to realize late than never.


In this interpretation, when we say something is a random variable, we do not mean that it is something that can acquire an outcome! We mean that it is something that assigns nonnegative real numbers to the possible outcomes, numbers that add up to one.

Yes, I understand all this. But you seem to be missing some crucial aspects of the *physics* problem. For example, experiments actually *do* (just as a sheer matter of empirical fact) have particular outcomes. So then either that particular outcome was determined from prior beables, or there was some element of irreducible randomness. It's 100% either-or. You can't just float "statistical" as an alternative by being vague about whether the underlying dynamics are or aren't deterministic.




Other interpretations suffer from two very difficult philosophical problems:
(1) They are nondeterministic. They assert that there is no reason the measurement turns out the way it did... it just did. But at least we have this probability distribution that describes the results!

"we have this probability distribution"... FROM WHERE? If from experiment, sure, no problem, but don't mistake that kind of empirical data for a *theory*. And if you are getting the probabilty distribution from some proposed underlying theory, then it should be possible to ask things like: is the theory deterministic? is the theory local?

You seem to just want to make an end run around all of this by (as I've thought all along) never positing a theory, and just talking about statistics. I don't object to your doing that. But I do object to your claiming to be doing something other than that, while simply doing that.



But my interpretation solves both problems.

(1) It is a deterministic theory of random variables.

What!? Your theory is *deterministic*? What in bloody %&@# have we been arguing about then?



(2) Probabilities are fundamental elements of reality -- so we don't have to use the frequentist definition to talk about probabilities.

What happened to your theory being (merely 2 seconds ago) deterministic?

You have completely lost me.



But, if you insist that it's too radical of an approach, we can stick with nondeterminism, and assert that quantum mechanics without the collapse postulate is capable of describing everything that can be described -- it has a unitarily evolving "state of the universe", and the only other things that can possibly be described are the probabilities involving the outcomes of measurements. But, as you remember, the classical usage of probabilities are statements about asymptotic behavior, and not statements about individual events.

OK, now I'm starting to think you're drunk or something. :yuck: QM without the collapse postulate is *deterministic*, so why the heck do you describe this as a way of "sticking with nondeterminism"? And here's an important fact that QM w/o the collapse postulate is (contrary to your statement) completely *incapable* of describing: that needles on measuring apparatuses in labs always point in particular directions (that cats are always definitely alive or dead, etc...). You talk about "the probabilities involving the outcomes of measurements"... but the whole crazy thing about QM w/o collapse is that measurements *don't have outcomes anymore*! That's what MWI is all about!

Ugh. I thought we were just disagreeing about some one little thing, but now it seems (not surprisingly, in retrospect) that there are huge major gulfs of confusion (about the meaning of "determinism", etc...) between us. Maybe it's not even worth pursuing. Tell me what you think of the H/T-price-of-tea-in-china example; I'd like to hear your thoughts on whether my example theory is or isn't local. But I don't have the energy (or time) to start over from scratch and figure out what "determinism" means, etc...
 
  • #188
(If you just want my answer to the price of tea in China bit, then skip to the very end)

You seem to be assuming that they both get pushed when you say P(HH) = P(TT) = 1/2, etc.
No -- by the definition of probability (which is frequentist in this theory), I say that:

The number of pairs of flipped magic coins that became HH
--------------------------------------------------------
The number of pairs of flipped magic coins

approaches 1/2 as time goes to infinity (according to any reference frame).


But you do bring up a good point -- it would not be necessary for

P(H) = 1/2

to hold, if there were enough pairs of magic coins that remained only half-flipped as time goes to infinity.


This contradicts what you said just above. If *all you mean* by the statements about probability is that about half of a large collection of results will be H (the other half T), then on what grounds do you claim the theory is not deterministic?
On the grounds that there is nothing in the theory to determine them. :tongue: And, of course, in axiom (4) where the flipping interaction was explicitly postulated to be nondeterministic.


(A) *commits* you to a stronger meaning for the probabilities ("propensities" or whatever).
Does not. Nondeterministic simply means not deterministic.

And even in a theory where probabilities have a stronger meaning, that doesn't suddenly mean you cannot speak about frequentist probabilities.

But this is *precisely* what a non-deterministic theory *does do*.
No, it's one possible way to formulate a nondeterministic theory. :tongue:

And even if you do have a nondeterministic theory that uses probabilities to describe individual events, you have a problem: you still need some way to connect these "fundamental" probabilities to observation.

I remember a Dilbert strip where he was visiting accounting, and he was introduced to their random number generator -- here are the numbers it generated while Dilbert was there:

nine... nine... nine... nine... nine... nine...

Dilbert commented, "Not very random, is it?", to which the guide responded, "That's the problem with randomness; you can never know."

It's all well and good to say that the event of "spitting out a number" is random, and getting "nine" has a mere 10% chance, but it's not a very useful quantity if it doesn't have any effect on what is observed!


At the moment, the only way of which I know to connect fundamental probabilities to observation is by asserting a relationship between the fundamental probabilities and frequentist probabilities.


Huh? What is the meaning of "impossible" other than "the probability of it happening is zero"?
I would go with "not possible". :tongue:

Suppose you had a coin in your laboratory. The first time you flipped it, you got heads. However, every other time you flip it, you get tails. Then, you get the frequentist probability:

[tex]
P(H) = \lim_{n \rightarrow +\infty} \frac{1}{n} = 0
[/tex]
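That limiting ratio can also be computed mechanically. A throwaway sketch (the helper `freq_heads` is my own name) makes the point that the head actually occurred even though its frequentist probability is zero:

```python
# The sequence H, T, T, T, ...: exactly one head among n flips, so the
# relative frequency of H is 1/n, which vanishes as n grows -- an event of
# frequentist probability zero that nonetheless happened.
def freq_heads(n):
    flips = ["H"] + ["T"] * (n - 1)
    return flips.count("H") / n

for n in (10, 1_000, 100_000):
    print(n, freq_heads(n))
```

So "probability zero" and "impossible" come apart under the frequentist definition, which is exactly the distinction being drawn here.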


If one is worried about it, one can prove that in the latter case the probabilities (or if you prefer, call them propensities) asisgned by the theory map correctly onto frequentist probabilities.
That would be a neat trick -- I'd be very interested to see how you could get from postulating the physical existence of a collection of numbers that always add up to 1 to the conclusion that those numbers agree with the frequentist probabilities.

Especially because I can do nefarious things like rearrange the numbers and tweak the dynamics to agree with my numbers. My rearranged theory will be identical to yours, except for the fact that it predicts different probabilities.


But who cares about any of that?
Presumably someone who would want to prove things. In particular:

The *real* question at issue here is: what does it mean for such a theory to be *local*?
It's hard to know what it means for a theory to be local if we don't know what the theory is. :tongue:


You have this boxed view of what a statistical theory is: there are these little boxes floating around space-time, and they spit out random numbers, and these random numbers happen to affect things like instrument readouts. But there are two problems:

(1) If you never think about what other sorts of theories there might be, then how can you prove anything about them?

(2) If you don't really know what's going on foundationally with the theories you do consider, then how can you prove anything about them?


Worrying about relative frequencies for lots of repeated trials is never going to address this.
Okay, but completely ignoring what "probability" is is going to make it awfully hard to prove anything about anything that uses them. :tongue:


I have no idea what you mean by "a statistical theory" if this is supposed to be neither nondeterministic nor deterministic.
That's why I elaborated in the next sentence. :tongue:

For example, experiments actually *do* (just as a sheer matter of empirical fact) have particular outcomes.
Which is probably why I mentioned this very issue in my post. (In an unquoted portion. :tongue:)

And, I know full well that you're aware that this issue can be successfully answered by theories in which experiments do not have outcomes -- it's just that you don't like the answer.


What!? Your theory is *deterministic*? What in bloody %&@# have we been arguing about then?
I thought it was clear that this section was not talking about the toy theory with magic coins. Sorry about that.


What happened to your theory being (merely 2 seconds ago) deterministic?
It is. The fundamental elements of reality are random variables (or at least things that can produce random variables). They evolve deterministically. Observables are just more random variables. They are completely determined by the fundamental elements of reality, and thus the interpretation is deterministic.

What is lacking is the assumption that an observable "collapses" to an outcome, which would make the interpretation nondeterministic.


I do think I was incorrect in saying we don't have to talk about frequentist probabilities, for the same reasons I mentioned earlier in my post.


QM without the collapse postulate is *deterministic*
The evolution of the wavefunction is deterministic. The actual outcomes of observation are not. Thus nondeterminism.

(Of course, if you drop the assumption that observations yield outcomes, instead of merely being random variables, or having expectation values, then we don't have nondeterminism)


Tell me what you think of the H/T-price-of-tea-in-china example;
It depends on a lot of little details.

But if you think that:
(1) The probabilities are beables.
(2) The relationship with the price of tea in China is not a conditional probability (which would be a beable, or determined by beables living in the region of space consisting of China and the area around the coin), but instead an actual change in P(H).

and I think you do, then it would be clear that the theory is nonlocal.

But I find neither (1) nor (2) a necessary assumption for a theory.
 
Last edited:
  • #189
Hurkyl said:
And, I know full well that you're aware that this issue can be successfully answered by theories in which experiments do not have outcomes -- it's just that you don't like the answer.

I think here you're referring to MWI. It's true, I don't like it. I've hardly tried to hide that fact, and I've explained at great length *why* I don't like it. But there's something else I dislike much more: the attempt to hide/disguise what one is really doing. It seems like you are just assuming that MWI is true (and shrugging off or obfuscating the point that that means experiments don't have actual outcomes) in an attempt to make your ideas seem more plausible than they are.



The evolution of the wavefunction is deterministic. The actual outcomes of observation are not. Thus nondeterminism.

You're missing the main point: if you get rid of the collapse postulate, there is *just* the deterministic evolution of the wave function. There *are* no "actual outcomes of observation" (in the normal/observed sense). In this theory, there just *isn't* the kind of thing you say would spoil the determinism. Of course, you can put it back in and then have "actual outcomes" and then, yes, the theory is back to being non-deterministic. But what you put back in is precisely the collapse postulate, so you're no longer talking about the same theory. Ugh.


(Of course, if you drop the assumption that observations yield outcomes, instead of merely being random variables, or having expectation values, then we don't have nondeterminism)

"the assumption that observations yield outcomes" is *not* a different assumption from the collapse postulate. They're (basically, FAPP) the same thing (in this context). The collapse postulate is postulated precisely to make the theory predict definite outcomes. You seem to think that one can dump the collapse postulate and keep the deterministic dynamics and then, oh by the way, just add in the assumption that experiments have definite outcomes. You can't do that. Because that assumption then *contradicts* what you just postulated as the fundamental dynamics of the theory.

Anyway, re: the H/T box with the tea and whether my proposed theory is nonlocal you said:

It depends on a lot of little details.

But if you think that:
(1) The probabilities are beables.
(2) The relationship with the price of tea in China is not a conditional probability (which would be a beable, or determined by beables living in the region of space consisting of China and the area around the coin), but instead an actual change in P(H).

and I think you do, then it would be clear that the theory is nonlocal.

But I find neither (1) nor (2) a necessary assumption for a theory.

I don't understand what you're getting at with either (1) or (2). I don't know exactly what it means for the probabilities to be beables. Does this mean that probabilities themselves (10%, 90%, 50%, etc.) are beables? Or that the outcomes governed by those probabilities are beables? Or what? But don't just answer those questions; tell me why you think any of this matters. I thought my initial description was entirely precise: the theory is irreducibly stochastic, meaning there is no beable which makes it come out H rather than T on some particular run.

In regard to (2), surely the price of tea in china is a beable. I don't know what you're getting at with the distinction between the dependence of P(H) being a conditional probability, and its being... something else. How does any of this matter? Again, what was unclear about the original description? P(H) = 50% if the price of tea in china is above a buck, and P(H) = 90% if the price of tea in china is below a buck (or whatever). All the details about exactly how you mathematically describe what kind of dependence this is, seems like obfuscation to me. Or maybe I just don't get it.

I am at least pleased that you seem to agree with me that (under some set of restrictive conditions that I don't yet understand) the theory I posited is nonlocal. Bell would have thought so too of course (since this theory was deliberately posed as a simple example of a theory that violates Bell Locality). I wonder what vanesch will say? =)
 
  • #190
I think here you're referring to MWI
...
But there's something else I dislike much more: the attempt to hide/disguise what one is really doing.
The thing is that I don't know if my ideas lead to MWI or not -- I had originally thought they did, but am now unsure. (This change occurred before this thread)

I am trying to indicate the reasoning that led me down this path -- that's why I talk about statistical foundations a lot. The way I currently prefer to interpret things is based entirely on taking the statement "The outcome of a measurement is a random variable" literally.

In statistics, if "X" is a random variable, then expressions like "X = 3" only have any sort of meaning when wrapped inside of a "P( ... )" construction. By analogy, if "S" is an observable, then I prefer to interpret expressions like "S = up" as only having any sort of meaning when wrapped inside of a "P( ... )" construction.

I do recognize that this is quite similar to MWI -- that's why I originally thought I was a MWI'er. But I'm just no longer sure.


You're missing the main point: if you get rid of the collapse postulate, there is *just* the deterministic evolution of the wave function. There *are* no "actual outcomes of observation" (in the normal/observed sense). In this theory, there just *isn't* the kind of thing you say would spoil the determinism. Of course, you can put it back in and then have "actual outcomes" and then, yes, the theory is back to being non-deterministic. But what you put back in is precisely the collapse postulate, so you're no longer talking about the same theory. Ugh.
But I can just put back in "observations have definite outcomes" without using the collapse postulate, if you do it in the manner I've been trying to suggest the past few pages.

I will name this the "deterministic evolution with nondeterministic definite outcomes" (DENDO), to make it easier to discuss.

In DENDO, we have a wavefunction with deterministic evolution. Measurements have definite outcomes, but when they do, the wavefunction does not physically collapse: it continues to merrily evolve along according to its (local) unitary evolution.

In DENDO, we can compute things like P(Alice sees H), or P(Alice and Bob both see H). (Yes, wavefunction collapse could be used as a mathematical tool to compute certain probabilities, but the point is that it's not physical)

But the probabilities computed by DENDO are not associated with individual events -- they are frequentist probabilities. (Why are they frequentist probabilities, and not associated with individual events? Because it's my theory, and I say so. :tongue:)

DENDO gets all of the frequentist probabilities right, such as P(Alice and Bob see H) = 1/2, and P(Bob sees H) = 1/2 -- which, to emphasize the point, means nothing more than that if we do 2N experiments, the number of times the event occurs tends to be near N.
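As an aside, the frequentist claim in that last sentence is easy to check numerically. Here is a minimal Python sketch (my own illustration, not part of DENDO itself): simulate 2N fair coin flips and confirm the head count lands near N.

```python
import random

def count_heads(n_flips, seed=1234):
    """Flip a fair coin n_flips times and count the heads."""
    rng = random.Random(seed)  # seeded for reproducibility
    return sum(1 for _ in range(n_flips) if rng.random() < 0.5)

N = 100_000
heads = count_heads(2 * N)
# The frequentist reading: the relative frequency tends toward 1/2,
# i.e. the count of heads in 2N trials tends to be near N.
print(heads, heads / (2 * N))
```

By the law of large numbers the relative frequency converges to 1/2, which is all the frequentist reading asserts; nothing in the simulation says anything about individual events.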


What DENDO does not do is to handle statistics in the way you like to do so: it does not postulate little RNG (random number generator) boxes that float around emitting random numbers that affect the physical dynamics.

If you so desired, you could analyse DENDO in such terms, but these boxes would be nonphysical. You would indeed have a nonlocal "wavefunction collapse" when an experiment takes on an actual outcome, but the collapse would only happen to your little aphysical RNG boxes, and not to the actual physical state of the system.



I do not like DENDO, because I (currently) don't like the idea of definite outcomes. But I do like it more than orthodox quantum mechanics, because it only computes frequentist probabilities, which are the only things we can observe anyways.


I don't understand what you're getting at with either (1) or (2). I don't know exactly what it means for the probabilities to be beables.
Well, I'm trying to go along with your picture of little RNG boxes that spit out numbers... I think you would say that your RNG box actually looks at the physical beables to decide how to spit out its numbers.

Maybe the probability is a beable itself -- or maybe the probability is simply computed from the beables. I don't really care. But the point is that the beables somehow encode information about a probability distribution, and your RNG box spits out its numbers according to that probability distribution.


P(H) = 50% if the price of tea in china is above a buck, and P(H) = 90% if the price of tea in china is below a buck (or whatever).
Well, it wasn't actually phrased that way.

I suppose it's a reflexive reaction from my experiences with statistics: when I hear "If the event E occurs, then the probability of event F is p", I immediately translate that into P(F | E) = p.

So the point of (2) was to make explicit that I think you meant that P(H) was actually changing, and not that you were simply stating a pair of conditional probabilities.

And just ignore my parenthetical -- it was only meant to apply to the case when you really are using conditional probabilities.
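To make the P(F | E) translation concrete: here is a small Python sketch (the numbers and the coin/tea mechanism are my own toy assumptions, echoing ttn's example) estimating the two conditional probabilities from simulated trials.

```python
import random

rng = random.Random(7)
trials = 100_000
counts = {True: [0, 0], False: [0, 0]}  # tea_above_dollar -> [heads, total]

for _ in range(trials):
    tea_above_dollar = rng.random() < 0.5       # E: price of tea in China above $1
    p_heads = 0.5 if tea_above_dollar else 0.9  # the coin's bias depends on E
    heads = rng.random() < p_heads              # F: coin comes up H
    counts[tea_above_dollar][1] += 1
    counts[tea_above_dollar][0] += heads

p_h_given_above = counts[True][0] / counts[True][1]
p_h_given_below = counts[False][0] / counts[False][1]
# Estimated P(H | tea > $1) and P(H | tea < $1): near 0.5 and 0.9.
print(round(p_h_given_above, 2), round(p_h_given_below, 2))
```

Stated this way, the theory is just a pair of conditional probabilities; whether one instead reads it as "P(H) actually changes" is exactly the interpretive question at issue.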
 
  • #191
ttn said:
I anxiously await your producing an example of a Bell Local theory (i.e., a relativistically-locally causal theory) that agrees with experiment.

Bell Local Theory

The speed of light is constant and invariant relative to the inertial state of the observer.

Let us consider two spatial positions A and B which are fixed on an inertial reference frame. Let A and B lie on the x-axis and be positioned at x = 0 and x = X respectively. On a standard space-time diagram A and B appear as a pair of lines parallel with the time axis.

Now let a beam of light leave A at t = 0. The beam will arrive at B at time T = X/C. If we suppress the y and z dimensions, we have now defined, relative to our inertial reference frame, two events at positions A and B, namely (0, 0) and (X, X/C) respectively.

Now let us determine the “subjective” time experienced by our beam of light whilst traveling from A to B. This we can calculate from Minkowski’s quadratic formulation of the Lorentz transformation. S is invariant and is called the proper interval.

(Delta S)^2 = C^2 (Delta t)^2 - (Delta x)^2

(Delta S)^2 = C^2 (X/C - 0)^2 - (X - 0)^2

Which gives:

S = 0

The proper time interval experienced by a "photon" passing from A to B has zero magnitude: the photon does not experience the passage of time in going from A to B.
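The zero-interval arithmetic can be checked in a couple of lines. Here is a Python sketch (working in units where C = 1, e.g. years and light-years, which is my own convention, not the post's):

```python
def interval_squared(dt, dx, c=1.0):
    """(Delta S)^2 = c^2 (Delta t)^2 - (Delta x)^2 for motion along the x-axis."""
    return c**2 * dt**2 - dx**2

X = 7.3  # any distance from A to B, in light-years
# Light leaves A at t = 0 and arrives at B at t = X (with c = 1, the travel time is T = X).
ds2 = interval_squared(dt=X, dx=X)
print(ds2)  # 0.0: the interval vanishes for every lightlike path
```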

It is this result that we will use to show that a world characterised by special relativity is consistent with the violation of Bell’s inequality.

Our commonsense tells us that physical states at A and B are independent of each other, and that an event at A can only affect events at B if the influence is carried by an intermediary entity: a wave or a particle. It also tells us that the photon's not experiencing the passage of time is down to its inertial condition.

To show the world is Bell local we must abandon our intuitive and commonsense notions of space and time and rely on the simple mathematics of the result.

The idea of super-positioning in wave-mechanics is commonly accepted. The idea of relativistic super-positioning of the physical states of locations that are separated from each other by distance and time is probably new, yet it can be logically deduced from the above result.

The idea is simply this:

The proper interval between events (0, 0) and (X, X/C) has zero magnitude regardless of whether or not anything is passing along the path from (0, 0) to (X, X/C). In other words, the state of the world at (0, 0) and that at (X, X/C) are not separated in space-time. It only takes one more step to infer that the Lorentz transformation super-positions the physical states of the locations (0, 0) and (X, X/C).

If you accept the validity of Lorentz super-positioning, then the consequences for our understanding of quantum behaviour cannot be overstated; it literally transforms the way we perceive the cosmos. It will be sweet irony if, as a result of the violation of Bell's inequality, Special Relativity becomes a central pillar in our understanding of what is going on in the quantum world.

Some of the consequences we can immediately identify.

Lorentz super-positioned quantum systems can interact directly. There is no requirement for particles to carry the electromagnetic force. By removing the photon we immediately simplify our view of the universe. The light cone becomes a cone of super-positioning, forming an infinity of connectivity between an individual quantum system and the rest of the cosmos.

With a little more deduction we can develop an insight into the relationship between relativity and uncertainty, understand how probability wave functions develop and how light interactions form interference patterns. But our immediate objective is to show that Lorentz super-positioning of quantum states is consistent with the violation of Bell's inequality.

Note the following argument is specific to correlation experiments involving light, for experiments involving Fermions a somewhat different approach is necessary.

First let's make it absolutely clear: this post recognises that the evidence for the violation of Bell's inequality is overwhelming and considers it a done deal! My argument is not with the outcome of correlation experiments but with the interpretation of their meaning for the nature of locality.

The superficial argument is that some kind of super-luminal interaction maintains intimate communication between "the particles" as they fly apart, so that if one is measured it affects the other. This, it is argued by some, contradicts special relativity, and therefore special relativity is in trouble! There are two possibilities: either there is a fundamental problem with special relativity and it is incapable of explaining the correlations, or special relativity is correct but requires further development to explain the violation of Bell's inequality. In other words, when Einstein claimed quantum mechanics was incomplete, he'd got the wrong theory! It was special relativity that was incomplete.

Special relativity was incomplete in that it did not recognise that the existence of the minus sign in the metric of space-time causes quantum entities to be universally super-positioned. That is, for any time t on the world line of an object, the moment t of the object is super-positioned with every other object in the world where their world lines intersect the light cone (cone of super-positioning) radiating from t (both into the future and into the past). This extension to special relativity would allow quantum mechanics to be developed within the framework of relativistic super-positioning. My own view is that the weird and counter-intuitive features of quantum mechanics can be explained in terms of relativistic super-positioning, including the explanation for the violation of Bell's inequality.

In the standard theory, when a quantum system becomes excited, it returns to its ground state by dumping its energy of excitation, in the form of a photon, into free space. In the RSP version of things there is no requirement for a photon: since all quantum entities are super-positioned, the excited quantum system instead becomes sensitive to the states of other systems on its cone of super-positioning. It will literally search out the light cone until it finds a suitable absorber system. When the donor recognises a system that is in a quantum state that can absorb its energy of excitation, it will, because of their super-positioning, donate the energy directly into the absorber system without the need of a mediating particle.

In this transaction (see Cramer), let the donor system S1 and absorber system S2 be separated by a distance X relative to an inertial reference frame at rest relative to S1.
For the transaction to take place, the quantum states of S1 and S2 must be mutually amenable for energy to pass between them. When the two systems interact, the proper interval of time separating them has zero magnitude, but the time difference on our reference frame is X/C.

So the ability of S1 to exchange energy with S2 depends on the state of S2 at a time X/C into the future of S1 relative to our reference frame.

Now let us consider a quantum system S1 that cascades and donates two quanta of energy in order to lower its energy of excitation. The donor system therefore needs to find two absorber systems to accept its energies of excitation. Because of universal relativistic super-positioning along its light cone, S1 can sense the states of other quantum systems. In order for the cascade to be triggered at some point on S1's world-line, say time t1, S1 must find two absorber systems S2 and S3 that are simultaneously capable of absorbing its energy of excitation. The light cone radiating from t1 must intersect the two systems when their quantum states are amenable to the absorption of the two quanta of energy held by S1. Let the interactions be configured such that S2 is positioned a distance X1 from S1 and S3 at a distance X2. The timing of S1 donating its energy of excitation is determined by two events:

1. S2 becoming amenable to receiving a quantum of energy from S1
2. S3 becoming amenable to receiving a quantum of energy from S1

Relative to a space-time diagram these events occur in S1's future, at t1 + X1/C and t1 + X2/C. The proper time intervals between all three events have zero magnitude.
This result is critically important in explaining how Bell's inequality is violated.

Now let's look at what happens during a correlation experiment:

In an Aspect-type experiment, let the left-hand switch be positioned a distance X1 from the source, the polarisers at X2 and the detectors at X3. Similarly, the right-hand side components are placed at X4, X5 and X6, respectively.

Therefore the state of the source at a time T0 will be super-positioned with the components of the experiment at T0 + X1/C, T0 + X2/C, T0 + X3/C, T0 + X4/C, T0 + X5/C and T0 + X6/C relative to our frame of reference. Thus an excited calcium atom at T0 will be "sensitive" to the experimental configuration and quantum states of the systems at the future times defined above.

Assuming that the donation event as seen by the source occurs at T0, the switching occurs somewhere between T0 and T0 + X1/C and/or T0 + X4/C. The source at T0 is super-positioned with the switches at T1 and T4. The switching occurs before the super-positioned state at interaction is achieved, therefore the earlier configuration can have no influence on the outcome of the counts.

Immediately before the interaction, the source is super-positioned with the detectors at times T3 and T6 and with the polarisers at T2 and T5. At the instant T0, a calcium atom in the source will have sensed the presence of suitable absorber systems to accept its energies of excitation. The absorbers will be either in the polarisers or the detectors, depending on the "orientation" of the donor system relative to the polarisers. Whether or not the absorbers are found in the polarisers or the detectors depends on how the calcium atom at T0 is aligned with the polarisers at T2 and T5, and the relative probabilities of a count on either side of the experiment will depend on the alignment of the polarisers relative to each other.

What is important to recognise is that absorber systems on either side of the experiment, immediately before acceptance, are super-positioned with the donor atom at T0. The same event and same quantum states with the same system orientation! The correlations will be dependent only on the relative settings of the polarisers at T2 and T5 respectively, and not on the states of any fictitious properties of particles supposedly mediating the electromagnetic force. Hence there are no super-luminal spooky forces changing the condition of the "photons" during flight when we alter the angle of a polariser.

Personally, I believe that relativistic super-positioning can explain many of the weird and counter-intuitive aspects of QM, such as the principle of uncertainty, the nature of the wave-function and interference.

I suspect sometime in the future quantum mechanics will be seen as the child of special relativity.
 
Last edited:
  • #192
Hurkyl said:
In statistics, if "X" is a random variable, then expressions like "X = 3" only have any sort of meaning when wrapped inside of a "P( ... )" construction. By analogy, if "S" is an observable, then I prefer to interpret expressions like "S = up" as only having any sort of meaning when wrapped inside of a "P( ... )" construction.

You do realize, though, that in the real world actual things actually happen, right? There are facts of the matter about how experiments come out (at least to whatever extent our direct empirical experience is veridical). So it does absolutely make sense to say something like "X = 3" without couching it in any stochastic/probabilistic form.



In DENDO, we have a wavefunction with determinstic evolution. Measurements have definite outcomes, but when they do, the wavefunction does not physically collapse: it continues to merrily evolve along according to its (local) unitary evolution.

So... take an electron in the |+x> spin state. Measure the z-component of its spin. What happens exactly? What is the state of the particle (and measuring device and human observer) after the experiment? And then crucially: what happens if the same measurement (on the same particle) is repeated?


DENDO gets all of the frequentist probabilities right, such as P(Alice and Bob see H) = 1/2, and P(Bob sees H) = 1/2 -- which, to emphasize the point, means nothing more than that if we do 2N experiments, the number of times the event occurs tends to be near N.

I'm sorry, but just saying "it gets all the frequentist probabilities right" doesn't make it so. It's really far from clear that your theory can even predict that measurements have definite outcomes -- which is obviously a prerequisite you have to satisfy before making a specific claim about the relative frequencies of those outcomes.


I do not like DENDO, because I (currently) don't like the idea of definite outcomes.

Well I don't like the idea that I can't flap my wings and fly to Jupiter. But it's too bad for both of us, because the facts are what they are.


I suppose it's a reflexive reaction from my experiences with statistics: when I hear "If the event E occurs, then the probability of event F is p", I immediately translate that into P(F | E) = p.

I have no objection to phrasing it that way. I just don't think it changes anything. Do you? Does the posited theory fail to be nonlocal just because you state it in terms of conditional probabilities?
 
  • #193
UglyDuckling said:
(Delta S)^2 = C^2(Delta t)^2 - (Delta x)^2

(Delta S)^2 = C^2 (X/C - 0)^2 - (X - 0)^2

I'll go you one better: consider two events A and B for which delta x is 4 light years, and delta t is 1 year. Then

(Delta S)^2 = c^2 (Delta t) ^2 - (Delta x)^2

= -3 light years

So not only is the real distance between the events not positive, it's less than zero. So there's no problem at all explaining how information could get from one to the other. It takes less than no time at all!

(This is meant as a parody of your argument, btw, not a serious claim.)
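For anyone following along with a calculator: plugging the parody's numbers into the same interval formula gives (Delta S)^2 = 1 - 16 = -15 square light-years, i.e. a spacelike separation with an imaginary "proper time". A quick Python check (my own, working in units where c = 1):

```python
def interval_squared(dt, dx, c=1.0):
    """(Delta S)^2 = c^2 (Delta t)^2 - (Delta x)^2."""
    return c**2 * dt**2 - dx**2

ds2 = interval_squared(dt=1.0, dx=4.0)  # 1 year, 4 light-years
print(ds2)             # -15.0: spacelike, so (Delta S) is imaginary
print((-ds2) ** 0.5)   # ~3.873, the magnitude of that imaginary interval
```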




Our commonsense tells us that physical states at A and B are independent of each other, and that an event at A can only affect events at B if the influence is carried by an intermediary entity: a wave or a particle. It also tells us that the photon's not experiencing the passage of time is down to its inertial condition.

In my example, the photon experiences not only no passage of time; it goes backwards in time (or maybe into imaginary time). Wow, who would have thought explaining causal links between space-like separated events would be so easy, and so consistent with relativity after all!



The proper interval between events (0, 0) and (X, X/C) has zero magnitude regardless of whether or not anything is passing along the path from (0, 0) to (X, X/C). In other words, the state of the world at (0, 0) and that at (X, X/C) are not separated in space-time.

...and so they're really at the same place? ...and so there's really no mystery about how they can affect each other?

Hint: space-time is not space.


In regard to Cramer's "transactional interpretation", see the devastating criticisms in Tim Maudlin's (excellent) book "Quantum Non-Locality and Relativity".
 
  • #194
ttn said:
I'll go you one better: consider two events A and B for which delta x is 4 light years, and delta t is 1 year. Then

(Delta S)^2 = c^2 (Delta t) ^2 - (Delta x)^2

= -3 light years

So not only is the real distance between the events not positive, it's less than zero. So there's no problem at all explaining how information could get from one to the other. It takes less than no time at all!

(This is meant as a parody of your argument, btw, not a serious claim.)

Thanks for this amusing parody.
However your challenge was for a Bell local theory that agrees with experiment.
The particle in your argument is superluminal and therefore violates the constraints of special relativity. Therefore your case cannot be Bell local, and I know of no experiment which demonstrates the possibility of "physical objects" exceeding the speed of light.
On the other hand, the original argument for "relativistic super-positioning" is formulated within the constraints of special relativity, and shows that for light correlation experiments the correlation index will be dependent on the setting of the angle between the polarisers in each branch of the experiment (at specific times during the running of the experiment).
I cannot conceive of any mathematical argument or experimental result that will refute the argument. The main restraint to acceptance is that it is at variance with our deep-seated intuitive perception of the world (the one that has allowed for our survival and enabled us to become the Earth's dominant species). I look forward to a rational counter-argument, not one based on intuition.



ttn said:
In my example, the photon experiences not only no passage of time; it goes backwards in time (or maybe into imaginary time). Wow, who would have thought explaining causal links between space-like separated events would be so easy, and so consistent with relativity after all!

The time taken by your particle to complete its journey is 3.872983i years (correcting the error in the arithmetic). In special relativity this has a specific meaning: imaginary time is distance and imaginary distance is time. The proper interval between events A and B is therefore 3.872983 light-years. In other words, the two events are separated by a space-like interval, and special relativity precludes any direct communication between the two events. There is no inertial frame of reference against which the separation of A and B can be represented in the dimensions of time.
There is absolutely no way the parody can be consistent with relativity.



ttn said:
...and so they're really at the same place? ...and so there's really no mystery about how they can affect each other?

The idea that objects can be in more than one place at once comes with the territory in quantum mechanics, and is probably the principal mystery surrounding the discipline. As Feynman pointed out, this is best illustrated by the double-slit experiment, where the probability distribution for where a "particle" is likely to hit the screen is influenced by the presence of both slits.

The idea of looking to special relativity for an explanation may seem absurd but I think this comes from the full implications of the Lorentz transformation not being generally recognised.

If you take any event on a standard space-time diagram, let's call it event A, and construct a light-cone from that event (future or past doesn't matter), all locations on that light cone are separated from event A by zero-interval paths. With respect to the proper interval, the entire light cone has collapsed to a singularity at A. If we now place a quantum entity at A, we may consider the entire light cone and all that it intersects to be super-positioned with our quantum entity at event A. Conversely, with respect to our space-time diagram, our quantum entity (at event A) expands to fill the entire light cone. The Lorentz transformation has done something very subtle: it has removed our ability to absolutely specify the location of our "particle" for a specific time. With respect to our space-time diagram the "particle" exists on the light cone, and its probability of interaction will be greatest in the vicinity of location A.

Uncertainty in the location of quantum objects is an inevitable consequence of the introduction of the Lorentz transformation into our natural philosophy.


Hint: space-time is not space.

Hint: Space-time is not Euclidean and we should not allow our intuition to cloud our judgement.

ttn said:
In regard to Cramer's "transactional interpretation", see the devastating criticisms in Tim Maudlin's (excellent) book "Quantum Non-Locality and Relativity".

Thanks I shall read it.
 
  • #195
ttn said:
But I anxiously await your producing an example of a Bell Local theory (i.e., a relativistically-locally causal theory) that agrees with experiment.


Hi ttn

I’m still waiting for a valid argument showing that direct interaction between quantum systems, at events on their world lines where their proper interval of separation has zero magnitude, cannot happen.

Your parody was inconsistent with SRT and did not address the issue of nature precluding direct communication between spatially remote quantum systems where their separation in space-time has zero magnitude.

I’m still waiting for a valid refutation of the proposition that “Lorentz super-positioning” of quantum systems is part of the process that mediates electromagnetism.

Cheers

UD
 
  • #196
UglyDuckling said:
I’m still waiting for a valid argument showing that direct interaction between quantum systems, at events on their world lines where their proper interval of separation has zero magnitude, cannot happen.

There is only one argument showing this, and it is the same argument showing that something *inside* the future light cone of an event can't causally affect the event. The argument is: there is no such thing as backwards-in-time causation.



I’m still waiting for a valid refutation of the proposition that “Lorentz super-positioning” of quantum systems is part of the process that mediates electromagnetism.

"Lorentz super-positioning" is a crazy phrase you seem to have made up. I have no idea what it means, and I assume others don't either. Indeed, based on what you seem to think this phrase means, I question whether you know what (normal, quantum-mechanical) super-positioning means -- i.e., whether you know any quantum physics in the first place.
 
  • #197
UglyDuckling said:
Hi ttn

I’m still waiting for a valid argument showing that direct interaction between quantum systems, at events on their world lines where their proper interval of separation has zero magnitude, cannot happen.

Your parody was inconsistent with SRT and did not address the issue of nature precluding direct communication between spatially remote quantum systems where their separation in space-time has zero magnitude.

You are mixing up two very different things:

1) Being on the light cone, where the invariant interval vanishes: [itex]s=\sqrt{c^2t^2-x^2-y^2-z^2}=0[/itex]

2) The Euclidean separation in space-time: [itex]\sqrt{c^2t^2+x^2+y^2+z^2}[/itex]

UglyDuckling said:
I’m still waiting for a valid refutation of the proposition that “Lorentz super-positioning” of quantum systems is part of the process that mediates electromagnetism.

Cheers

UD

Following your reasoning ANY two points in the universe would have a space-time separation of zero! Each pair of space-time points A and B has many points C which are on the light cone of both A and B, that is: AC = 0 and BC = 0 and thus AB = 0 + 0 = 0.

Regards, Hans
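Hans's construction is easy to check numerically. A minimal Python sketch (the coordinates are illustrative, in units with c = 1): two spacelike-separated events A and B each lie at zero interval from a common event C on both of their light cones, yet the interval between A and B themselves is nowhere near zero. Null intervals simply do not add like Euclidean distances.

```python
# Squared Minkowski interval s^2 = c^2 t^2 - x^2 - y^2 - z^2 (units with c = 1).
def interval_sq(e1, e2):
    dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
    return dt**2 - dx**2 - dy**2 - dz**2

# Illustrative events (t, x, y, z): A at the origin, B displaced purely in space.
A = (0.0, 0.0, 0.0, 0.0)
B = (0.0, 2.0, 0.0, 0.0)
# C lies on the light cone of A and also on the light cone of B.
C = (1.0, 1.0, 0.0, 0.0)

print(interval_sq(A, C))  # 0.0: A and C are null-separated
print(interval_sq(C, B))  # 0.0: C and B are null-separated
print(interval_sq(A, B))  # -4.0: yet A and B are spacelike-separated, not "in the same place"
```

So AC = 0 and CB = 0 while AB is nonzero, which is exactly the non-additivity Hans is pointing at.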
 
Last edited:
  • #198
Hans de Vries said:
You are mixing up two very different things:

1) Being on the light cone, where the invariant interval vanishes: [itex]s=\sqrt{c^2t^2-x^2-y^2-z^2}=0[/itex]

2) The Euclidean separation in space-time: [itex]\sqrt{c^2t^2+x^2+y^2+z^2}[/itex]

I don't think UD is mixing those two things up. I think he's just defining "separation in space-time" to be what you call "s" (usually called "the interval" or some such). What you called "separation in space-time" is not a normal thing, certainly not something one would find in any relativity text. It is, of course, not a frame-invariant quantity.
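The point that the Euclidean combination is not frame-invariant, while the interval is, can be verified directly. A minimal Python sketch (the event coordinates and boost velocity are illustrative, units with c = 1):

```python
import math

def boost_x(event, v):
    """Lorentz boost along x with velocity v (units with c = 1)."""
    t, x, y, z = event
    g = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz factor gamma
    return (g * (t - v * x), g * (x - v * t), y, z)

def interval_sq(e):
    """Invariant interval squared: t^2 - x^2 - y^2 - z^2."""
    t, x, y, z = e
    return t**2 - x**2 - y**2 - z**2

def euclidean_sq(e):
    """The NON-invariant Euclidean combination: t^2 + x^2 + y^2 + z^2."""
    t, x, y, z = e
    return t**2 + x**2 + y**2 + z**2

e = (2.0, 1.0, 0.0, 0.0)   # illustrative event
eb = boost_x(e, 0.6)       # the same event described in a frame moving at 0.6c

print(interval_sq(e), interval_sq(eb))    # equal, up to float rounding
print(euclidean_sq(e), euclidean_sq(eb))  # clearly different between frames
```

Only the first quantity is something all observers agree on, which is why it (and not the Euclidean combination) appears in relativity texts.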


Following your reasoning ANY two points in the universe would have a space-time separation of zero! Each pair of space-time points A and B has many points C which are on the light cone of both A and B, that
is: AC = 0 and BC = 0 and thus AB = 0+0 = 0.

That's an interesting point. Of course, you're assuming that the "separation" between two points (AB) is equal to the sum of the separations between two sub-pairs (AC + CB). That could be denied; indeed, it's just plain false for the usual ("interval") definition of "separation in spacetime" (your "1" above, not your "2"). But that just goes to show how radically different "separation in spacetime" is from "separation in space" (where distances definitely add in the AB=AC+CB sense, or at least obey some kind of triangle inequality such that the equality works when all three distances are zero!). And I think that is the major confusion that is leading to UD's bogus reasoning. He seems to think that if two events have zero separation in spacetime, there's no separation between them, i.e., they are really "at the same place" and hence there's no problem with them causally affecting one another. But, to borrow Pauli's old phrase, that is not even wrong.
 
  • #199
ttn said:
And I think that is the major confusion that is leading to UD's bogus reasoning. He seems to think that if two events have zero separation in spacetime, there's no separation between them, i.e., they are really "at the same place" and hence there's no problem with them causally affecting one another.

That's the reason why I brought up this example: to show how this reasoning leads to nonsensical results.

Regards, Hans
 
Last edited:
  • #200
going through this thread- I find it a little disconcerting that in 2006 there are still those who try to argue against the MWI- it has always been by far the dominant interpretation of QM- which is of course the most successfully tested theory of reality in history-

and the MWI has been experimentally verified-

let me say that again- the MWI has been verified and confirmed as the correct interpretation of QM- in this lecture and experiment: http://www.quiprocone.org/Protected/deutsch_lect_2.wmv David Deutsch explains and performs an experiment which empirically demonstrates the existence of the Multiverse- some are considering this the first proof that we live in a Multiverse- [also in 2000 Deutsch and Hayden showed that there are no non-local aspects of quantum systems: http://arxiv.org/abs/quant-ph/9906007 ]this work has earned him a great deal of acclaim and funding recently- [ http://www.edge.org/3rd_culture/prize05/prize05_index.html ] and I wouldn't be at all surprised if these discoveries eventually win him the Nobel Prize


"The quantum theory of parallel universes is not the problem, it is the solution. It is not some troublesome, optional interpretation emerging from arcane theoretical considerations. It is the explanation—the only one that is tenable—of a remarkable and counter-intuitive reality"
~David Deutsch

“The MWI is trivially true!” ~ Stephen Hawking



“Political scientist" L David Raub reports a poll of 72 of the "leading
cosmologists and other quantum field theorists" about the "Many-Worlds
Interpretation" and gives the following response breakdown [T].

1) "Yes, I think MWI is true" 58%
2) "No, I don't accept MWI" 18%
3) "Maybe it's true but I'm not yet convinced" 13%
4) "I have no opinion one way or the other" 11%

Amongst the "Yes, I think MWI is true" crowd listed are Stephen Hawking
and Nobel Laureates Murray Gell-Mann and Richard Feynman. Gell-Mann and
Hawking recorded reservations with the name "many-worlds", but not with
the theory's content. Nobel Laureate Steven Weinberg is also mentioned
as a many-worlder, although it is not stated when the poll was
conducted; presumably it was before 1988, when Feynman died. The only "No,
I don't accept MWI" named is Penrose.

The findings of this poll are in accord with other polls, that many-
worlds is most popular amongst scientists who may rather loosely be
described as string theorists or quantum gravitists/cosmologists. It
is less popular amongst the wider scientific community who mostly remain
in ignorance of it.”
http://www.anthropic-principle.com/preprints/manyworlds.html
 
Last edited by a moderator:
  • #201
setAI said:
and the MWI has been experimentally verified-


If it weren't such a deep indictment of the culture, I'd roll on the floor and laugh at this statement (which appears to come from a *philosopher*, i.e., a person whose profession is supposed to be thinking clearly about this kind of thing). No wonder the foundations of physics are so screwed up. I mean, seriously, in order to even claim consistency with experiment, MWI has to ask us to believe that we're deluded about everything we've ever thought or believed on the basis of perception (including, notably, how all the experiments have come out). So it's been "experimentally verified" to about the same extent that it's been experimentally verified that eating a 100 pound asbestos-and-uranium sandwich will cure you of cancer. Sure, all the people who ate such sandwiches have *appeared* to die of cancer shortly thereafter, but I have this theory that says those appearances are wrong, and that *really* those people are all living happy lives and *really* it's all the people who *didn't* eat those sandwiches that are dead (again, appearances to the contrary notwithstanding). You see, it's part of my theory that our eyes deceive us. And, hey look, my theory is consistent with what we appear to see -- it explains the delusionalness of our so-called perception and tells us what is really true -- so therefore it's confirmed by the data.

Again, that somebody could seriously believe all of this would normally be quite funny. I love laughing at such stupidity. But it's really not funny when such stupidity is so evidently widespread among supposedly serious people. Then it's sad and scary.
 
  • #202
Sucks teeth: I just don't buy the Many Worlds Interpretation.

http://en.wikipedia.org/wiki/Many-worlds_interpretation

Apart from the concept of multiple realities, the conceptual showstopper for me is that these many worlds must be the world. It's like when I hear somebody on TV talking about "Universes". The word Universe is of Latin origin. Think "uni" as in unicycle, and "versa" as in vice versa. It means "turned into one". It means everything, and you just can't have more than one everything.

I don't have any answers, but I wonder if the real problem is in the word particle.
 
  • #203
LOL! as I said- it is rather disturbing that in 2006 we are still seeing this odd distaste for the reality of the MWI- 10 or even 5 years ago you could still have a legitimate argument- but the advent of quantum computers ended this debate-[ quantum computers simultaneously show that the Copenhagen interpretation is false and demonstrate the physical reality of the MWI http://xxx.lanl.gov/abs/quant-ph/0104033 ]- but I guess it takes some time for the ripples to shake everyone from their obsolete paradigms- I am sure there were many scientists who continued to argue for a classical ether for a few years after the Michelson-Morley experiment

but- at the end of the day one cannot argue with OBSERVED reality- the Earth is round- evolution happens- and we live in a multiverse- you're just going to have to deal with it-

there are no philosophical or logical problems with a multiverse- only personal aesthetic ones- as D Deutsch has pointed out again and again- slowly the rest of the physics community has agreed- but there are still some out there who have a hard time with it- [I don't really understand why- a multiverse is an ontological necessity unless you posit some epicycle-like mechanism that magically prevents other universes from existing just as this one- a demon that exists just to nullify the other decohered histories of the Schroedinger equation]

"Our best theories are not only truer than common sense, they make more sense than common sense... "

"there are indeed other, equally real, versions of you in other universes, who chose differently and are now enduring the consequences. Why do I believe this? Mainly because I believe quantum mechanics... Furthermore, the universes affect each other. Though the effects are minute, they are detectable in carefully designed experiments... When a quantum computer solves a problem by dividing it into more sub-problems than there are atoms in the universe, and then solving each sub-problem, it will PROVE to us that those sub-problems were solved somewhere - but not in our universe, for there isn't enough room here. What more do you need to persuade you that other universes exist? "

~David Deutsch
 
Last edited:
  • #204
Farsight said:
Apart from the concept of multiple realities
setAI said:
there are no philisophical or logical problems with a multiverse
AFAIK, that is not an accurate depiction of MWI.

The universe has a state. That state contains many details that are irrelevant to you. So, one applies a mathematical operation (partial trace) to the state of the universe to obtain a state containing only the information relevant to you. That is your "world". It is not some sort of alternate reality.

This isn't even a quantum idea -- we do it all the time in the classical regime. For example, when we study ballistics here on Earth, we discard the information about what's happening over in Andromeda galaxy.


But, I presume both of you are not talking about the "worlds", but the hypothesis that the universe really is in some sort of quantum state. But that also doesn't look like multiple realities, or a multiverse.

The notion of a "superposition of states" is simply an artifact of one mathematical way of representing states... and even in the linear representation, there is one choice of freedoms in which the state is not in a superposition.

E.g. in one perspective, a spin-1/2 particle is in a superposition of spin-up and spin-down about the Z axis... but you could say the superposition is just an artifact of your perspective: it's actually in a definite spin-up state about the Y axis. (depending on the actual amplitudes, of course, a different axis would be the "right" one)
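This basis-relativity of "superposition" can be checked for a spin-1/2 system in a few lines of numpy (the particular state and axis are illustrative): the equal-amplitude superposition of Z-up and Z-down is itself a definite eigenstate of the spin operator about the X axis.

```python
import numpy as np

# Pauli matrix for spin about the X axis
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# Z-basis states
up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)

# A "superposition" in the Z basis...
psi = (up_z + down_z) / np.sqrt(2)

# ...is a plain eigenstate of sigma_x with eigenvalue +1:
# no superposition at all from the X-axis point of view.
print(np.allclose(sigma_x @ psi, psi))  # True
```

Whether a state "is a superposition" thus depends entirely on which measurement basis you describe it in.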



ttn said:
I mean, seriously, in order to even claim consistency with experiment, MWI has to ask us to believe that we're deluded about everything we've ever thought or believed on the basis of perception (including, notably, how all the experiments have come out).
So does Newtonian mechanics. :tongue: When I see things in motion, they tend to come to rest. But Newtonian mechanics says I'm deluded and things in motion tend to stay in motion, and that there's some mysterious external force that is causing things to come to rest! :rofl:

And, of course, Special Relativity. It would have me believe that when someone is walking across my room, I should think they appear thinner! Of course, it conveniently says that the difference should be too small to measure. :rolleyes:

Oh, and General Relativity. It would have me believe there is no such force as gravity! How daft can you get? :yuck:



The situation is fairly analogous to Special Relativity.

Maxwellian electrodynamics said some fairly wacky things about the universe.

Some people invented all sorts of strange physical mechanisms so that they could incorporate the empirical success of Maxwell's equations into their beloved notion of the universe.

Some other people adopted the view that Maxwell was right, adjusted their notion of the universe appropriately, and were able to explain why the universe appears the way we thought it was.

We see now which camp has won. :wink:


That's what's going on here: the quantum theory suggests quantum weirdness. Some people like to believe in some strange physical mechanism that allows them to incorporate the successes of quantum mechanics into their beloved classical notion of the universe. Others adjust their notion of the universe appropriately. MWI is one way to explain why the universe appears the way we thought it was.


But it's really not funny when such stupidity is so evidently widespread among supposedly serious people.
There's a big difference: your "asbestos-and-uranium-sandwich theory" is not based upon empirically successful physics. MWI is.

The "stupidity" here is the irrational clinging to some ad-hoc physical mechanism that make one's beloved notion of the universe literally true, and refusing to even entertain the notion that such mechanisms aren't necessary.
 
  • #205
Thanks for the link, setAI, I've printed out David Deutsch's paper and will study it carefully.

hurkyl, what you're saying sounds good to me. I think.

I'll get back here when I've read that paper tonight.
 
  • #206
Hurkyl said:
So does Newtonian mechanics. :tongue: When I see things in motion, they tend to come to rest. But Newtonian mechanics says I'm deluded and things in motion tend to stay in motion, and that there's some mysterious external force that is causing things to come to rest! :rofl:

Huh? This makes it sound like you are quite clueless about Newtonian physics (which I doubt is the case). Do you really think Newton's first law says "all moving things will keep on moving no matter what"?

Anyway, there is a very clear sense in which MWI insists that we are deluded about basic apparent perceptual facts. This is in contrast to every other scientific theory that is or has been widely accepted. If this sense is not clear to you, maybe you should ask about it (or, say, read the clarifying sections of David Albert's "QM and Experience") rather than parodying it.


And, of course, Special Relativity. It would have me believe that when someone is walking across my room, I should think they appear thinner! Of course, it conveniently says that the difference should be to small to measure. :rolleyes:

In other words, what you actually see matches (within the relevant uncertainties) with what the theory says is actually happening. This is in stark contrast to MWI. According to MWI, the real state of the world does *not* have a person walking across your room, so your perception to the contrary is a *delusion*.



Maxwellian electrodynamics said some fairly wacky things about the universe.

Please. When I say that MWI requires us to accept that our perceptual experience is delusional, I'm not just saying "MWI is fairly wacky". I use language carefully and precisely. MWI is fairly wacky, yes, but that is not the point at issue here.


That's what's going on here: the quantum theory suggests quantum weirdness. Some people like to believe in some strange physical mechanism that allows them to incorporate the successes of quantum mechanics into their beloved classical notion of the universe. Others adjust their notion of the universe appropriately. MWI is one way to explain why the universe appears the way we thought it was.

The whole point I am making is that your last sentence, taken literally, is quite false. "The way we thought it was" surely includes things like the needles on experimental apparatus in Germany in the 1920s swinging in particular directions, yes? Well, according to MWI, that (and a gazillion other things like it, including, as I've said, pretty much all of our perceptual experience of the world) never actually happened. In other words, it is a delusion. So is MWI "one way to explain" all the perceptual/empirical evidence that led to quantum mechanics? Literally speaking, no. It doesn't explain that evidence; it explains it away (so to speak). According to it, that evidence was all wrong.

You have to admit, that's a very uncomfortable (because circular) position for a theory to be in.



There's a big difference: your "asbestos-and-uranium-sandwich theory" is not based upon empirically successful physics. MWI is.

No, it isn't. At least, not in anything like the normal scientific sense.



The "stupidity" here is the irrational clinging to some ad-hoc physical mechanism that make one's beloved notion of the universe literally true, and refusing to even entertain the notion that such mechanisms aren't necessary.

So, it's stupid to believe that when I see a table in front of me, there's really, in external physical reality, a hunk of table-shaped stuff out there? Or that when I see the needle go right, that's because, really, there is a needle and it moved to the right?

I would ask you to seriously consider what is left of science (including in particular the alleged empirical evidence for MWI) if you take this seriously.
 
  • #207
No, it isn't. At least, not in anything like the normal scientific sense.
You take an empirically successful principle (unitary evolution), and you push it to its logical conclusion -- in the theoretical domain, how can you get more scientific than that? :tongue:

To the best of my knowledge, MWI is a theory about unitary evolution. That's it. Unlike your "curative asbestos-and-uranium sandwiches" (hey, weren't you objecting to parodies? :tongue2:), MWI doesn't postulate anything new: it simply studies what follows from unitary evolution. And at any point, you could reintroduce wavefunction collapse and be doing orthodox quantum mechanics. (But, you would no longer be doing MWI)


I had previously been thinking that you meant "deluded" simply to refer to the fact we think we see a classical state, when the universe is in a quantum state. But I have absolutely no idea where you get things like:

: According to MWI, the real state of the world does *not* have a person walking across your room
: Well, according to MWI, that ... never actually happened.
: According to it, that evidence was all wrong.
 
  • #208
Hurkyl said:
You take an empirically successful principle (unitary evolution), and you push it to its logical conclusion

People disagree that it's logical. Unitarity is surely a useful property, but making it the be-all of everything, at the cost of either "parallel worlds" or the possibility that people I see on the street are in a different state as far as their furshlugginer consciousness is concerned, is not best described as "logical" IMHO. And it still doesn't answer the question, how can QM operate, as evidently it does, completely hidden from human consciousness, say inside the Sun?
 
  • #209
setAI, hurkyl: I read the David Deutsch paper and have to say I didn't understand it. So I read it again, and again, and I still couldn't follow its thrust or see anything that "proved" the MWI. If either of you could post a link to an alternative paper or article I'd be grateful.
 
  • #210
Farsight said:
... see anything that "proved" the MWI

There is nothing that proves MWI, it's something that those who believe in it try to persuade you of. In other words it's like philosophy or religion: "Go on and faith will come to you."
 
