|Apr13-06, 05:55 AM||#171|
Validity of Relativity
They better "materialize" some way, or they will have no dynamical consequences. Nobody thinks that the irreducible randomness at some point brings into existence a new (physical) scalar field, or a big set of polished bronze numerals reading "0.732752" (or whatever the random number was). But if what's being explained by the underlying stochastic theory is some kind of measurement outcome, then obviously the generated random numbers have to have a real physical effect on *something* which then in turn physically influences the macroscopic measurement devices (which *nobody*, except crazy MWI-people, denies are beables).
All of the points in the last few posts have been pointless semantic distractions. It's clear that any random numbers generated by a stochastic theory have to manifest themselves in some physical way -- otherwise they would be irrelevant to empirical observations and there'd be no point at all in hypothesizing the theory in question. The only question can be: where (at what spacetime event) do such numbers arise?
To answer this question is to admit non-locality (in the kind of examples we've been discussing). If Bob makes the "first" measurement and there is something random that controls his outcome, then the subsequent effect of that number (which arises near Bob) on the spacelike-separated outcome constitutes nonlocality.
And to *not* answer this question is to admit non-locality. If Bob makes the "first" measurement and this new random number comes into existence *not* at some spacetime event near Bob, but (say) simultaneously along some spacelike surface through Bob's measurement event, said popping into existence constitutes nonlocality.
|Apr13-06, 06:11 AM||#172|
I will grant, however, that if you are going to begin by throwing out the empirical data that was supposed to define this situation (Alice and Bob each see H or T w/ 50/50 probability, but the two outcomes are always paired HH or TT) then, yeah, sure, unitary-only QM can explain the correlations. Just like Ptolemy's theory of the solar system can explain the last 100 days of data for the price of tea in china...
I don't know what to say. If you think the above, you simply haven't understood Bell Locality at all. The whole point of this condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a local way? For your example of the irreducibly-random theory which purports to explain the HH/TT correlation, Bell Locality is violated: a complete specification of beables along some spacelike surface prior to both measurement events does *not* screen off the correlation.
Yes, one can *deduce* from this "statistical independence" -- a complete specification of beables in the past of the two events should screen off any correlations between the outcomes. But this is not an arbitrary hypothesis; it is a *consequence* of the basic requirement, which is *locality*.
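For reference, the screening-off requirement being invoked has a standard factorized form (a textbook rendering of what's described above, with $\lambda$ the complete beable specification on the prior spacelike surface, $a$, $b$ the local settings, and $A$, $B$ the outcomes):

$$P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda)$$

For the irreducibly-random HH/TT theory, with nothing in $\lambda$ beyond the coins' initial U states (and the trivial settings suppressed), the left side gives $P(HH) = 1/2$ while the right side gives $\tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4}$, so the condition fails.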
Let me ask you a serious question: have you ever read Bell's papers on this stuff?
|Apr13-06, 06:22 AM||#173|
I see no reason to postulate the existence of any physical scalar fields. The point is too simple to deserve such fanciness: you could have a theory in which there is irreducible randomness (the production of some random number from some kind of probability distribution), but in which that number (whatever it turns out to be) is then "available" at other spacetime events to affect beables. And my point is simple: if it is only available at spacetime points in the future light cone, the theory is local; if it's available also outside the future light cone, the theory is nonlocal.
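As a minimal sketch of that geometric criterion (a toy function of my own, not anything from the discussion): a generated random value is "locally available" at an event exactly when that event lies in the future light cone of the event where the value arose.

```python
def in_future_light_cone(event, source, c=1.0):
    """Return True if `event` lies in or on the future light cone of `source`.

    Events are (t, x, y, z) tuples. A random value arising at `source` may
    affect beables at `event` without nonlocality only if this holds.
    """
    dt = event[0] - source[0]
    dr2 = sum((e - s) ** 2 for e, s in zip(event[1:], source[1:]))
    return dt >= 0 and (c * dt) ** 2 >= dr2

print(in_future_light_cone((2, 1, 0, 0), (0, 0, 0, 0)))  # True: timelike future
print(in_future_light_cone((0, 1, 0, 0), (0, 0, 0, 0)))  # False: spacelike separated
```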
You say that as soon as one assigns physical existence to the random quantities, the theory becomes deterministic. I could not disagree more strongly. First, if you *don't* assign physical existence to the random quantities, what the heck is the point? They then play absolutely no role in the dynamics. And second, whether you do or don't assign physical existence to the random quantities has no bearing whatever on whether the theory is deterministic. A theory in which there is randomness which affects things is *not deterministic*. For example: orthodox QM (with the collapse postulate) is *not* a deterministic theory, even though its irreducible randomness (which of the eigenstates the initial state collapses to) manifests its "outcome" immediately in the beables (the wave function is now that eigenstate).
|Apr13-06, 06:25 AM||#174|
Please don't tell me I've spent all this time trying to explain things to you, only to have *this* appear as your considered view.
Sure, you can explain EPR/Bell data with a theory in which the causes of certain events come from the future. Do you seriously think such a theory would be "locally causal"?
|Apr13-06, 06:28 AM||#175|
(A stochastic theory, of course, doesn't like to produce outcomes... it prefers to simply stick with a probability distribution on the outcome space)
|Apr13-06, 07:16 AM||#176|
If we accept as a given that the observed result is (say) produced by electrons landing "here" instead of "there" on the screen, then your proposed stochastic theoretical explanation of the observations better include some way for the random numbers to affect electrons. If they "do not have any sort of effect on anything" then you are just spinning your wheels, failing in principle to propose the kind of thing that could ever possibly address the issue at hand.
Really, this comes down to the old objection that one could just take the quantum mechanical formalism as a blind algorithm, which makes no claims about any beables... and hence make correct predictions without ever asserting anything that could possibly be construed as violating local causality. Of *course* one can do this. One can avoid making nonlocal claims by refusing to claim anything about anything. Duh. But we *know* that big macroscopic things exist, and we *know* they're made of littler things. The question is: is it possible that the dynamics of the little stuff (or the sub-little stuff, or whatever) respects local causality? Bell gave a theorem that the answer is no: no locally causal (Bell local) theory can account for what's observed.
Putting tape over your mouth and refusing to assert a theory does not constitute a counterexample to this theorem.
|Apr13-06, 07:16 AM||#177|
(|HH> + |TT>) / sqrt(2)
from which it's easy to derive the correlation. Furthermore, if we actually conduct an experiment to test if there's a correlation, unitary evolution provides us with the resulting state
(|HH> + |TT>)|correlated> / sqrt(2)
As opposed to the state we'd get when there wasn't any entanglement at all:
((|HH> + |TT>)|correlated> + (|HT> + |TH>)|uncorrelated>) / 2
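As a quick numerical check of the correlation in the first of these states (a sketch assuming numpy; the basis ordering |HH>, |HT>, |TH>, |TT> is my own choice):

```python
import numpy as np

# The entangled state (|HH> + |TT>)/sqrt(2) in the basis |HH>, |HT>, |TH>, |TT>
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Born-rule joint probabilities are the squared amplitudes
for label, amp in zip(["HH", "HT", "TH", "TT"], psi):
    print(label, amp ** 2)  # HH 0.5, HT 0.0, TH 0.0, TT 0.5: always paired
```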
Before I respond to the next part, allow me to remind you of your post #133 that launched this particular arc:
First off, notice that your first question is very circular. Filling in the implicit stuff (as I understand it), you say:
"The whole point of the Bell locality condition is to ask: are the beables of a theory sufficient to explain certain observed facts in a Bell local way?"
But you did not ask for a toy theory that was Bell local: you asked for a theory that was consistent with Lorentz invariance: with special relativity. (In fact, isn't the whole point of this thread to ask the question of consistency with special relativity?)
Bell locality is, indeed, violated, because one of its underlying assumptions is that there is no statistical dependence. By looking at all of the responses through the filter of Bell locality, you are, in fact, asking:
"Is there any theory consistent with special relativity that is capable of predicting statistical dependence, under the condition that there is no statistical dependence?"
|Apr13-06, 07:27 AM||#178|
But this has absolutely nothing to do with the idea of a random number generator you seem to be using in post #171.
You seem to have in your mind that a stochastic universe would be analogous to how a computer program will use a pseudorandom number generator to spit out a sequence of numbers, and then use those numbers to control how things dance across its screen.
But that's not how statistics works! A random variable is nothing more than a measure on a space of outcomes. In fact, it is a very difficult problem to try and give any sort of precise meaning to the word "random number generator".
(In a classical theory)
E.g., one random variable could be on the space of possible positions and momenta of an electron. Another random variable could be on the configurations of the electromagnetic field. The dynamics of the theory would allow us to compute a new random variable on the space of possible positions, momenta, and accelerations of the electron.
(Of course, this is just marginalized from the joint distribution over the electron position, momentum and acceleration and the electromagnetic field configuration)
|Apr13-06, 07:51 AM||#179|
You seem to be using/assuming the MWI without being willing to admit the well-known weirdness of such a view. Sure, unitary evolution can get you that superposition, and you can twist and turn and eventually connect this up with what we do experience (by denying that what we see in front of our face is the truth, i.e., by postulating that we're all deluded about what the outcomes of the experiments actually were). But normally when a physicist asks if some theory or other "explains the data" he or she is not looking for a metaphysical-conspiracy-theory about how, really, the data we got directly from looking at an apparatus is a delusion.
"Are all the beables here sufficient to describe what's going to happen?"
But as I pointed out, this just *is* the Bell Locality condition. Maybe we're not on the same page about what the phrase "sufficient to describe what's going to happen" means. I told you what that phrase means for Bell Locality, but I don't understand what, if anything, you're proposing as an alternative. You seemed to simply reject my proposal for the meaning of that phrase on the grounds that it presupposed the statistical independence hypothesis, but that simply is not true.
|Apr13-06, 09:54 AM||#180|
And to counter your assertion ("no Bell local theory can agree with experiment"), I instead state that "no Bell realistic theory can agree with experiment" -- "Bell realistic" meaning: any theory in which there is a more complete specification of the system than the HUP allows. You cannot beat the HUP!
And please, do not bother with BM as a candidate. I am talking about a theory in which the HUP is beaten. EPR thought they had it, but experiment showed otherwise. If you can't beat the HUP, even in principle, then you are acknowledging that there are no hidden variables in the first place.
|Apr13-06, 10:23 AM||#181|
Sigh. I count at least 6 major confusions here:

(1) Reverse-temporal causation certainly is *not* "local in every sense of the word".

(2) A theory with reverse-temporal causation, assuming such a thing could even be made well-defined, could be 100% "realistic".

(3) A "local non-realistic" theory is not consistent with Bell's theorem anyway, if what you mean is what Bell meant: the full, two-part argument that no local theory, realistic or not, can agree with the empirical predictions of QM.

(4) What you call my "assertion" is actually something that has been proved rigorously, unlike the vague and arbitrary statement you seem to want to "counter" me with.

(5) The meaning of "You cannot beat the HUP" depends crucially on the meaning of "HUP". If one takes the HUP as a restriction on the simultaneous *reality* of certain variables, then you are, like Bohr, just rejecting the conclusion of EPR without demonstrating any error in the argument; and if one takes the HUP as merely a restriction on simultaneous *knowledge* of certain variables, then something like BM *does* count as a "candidate", since it makes the same empirical predictions as quantum theory and yet has particles following definite trajectories.

(6) No experiment ever "showed otherwise", i.e., refuted the EPR argument. With the help of Bell's theorem we now know that the kind of theory EPR lobbied for is not empirically viable; but this does *not* mean that experiment has refuted the argument they used to arrive at that belief; the argument might be valid, but the *premises* false.
This last is the most crucial. EPR believed in locality. EPR also constructed an argument for the proposition that "Locality --> Hidden Variables". Putting these together, they proposed that a local hidden variables theory should be sought to replace orthodox QM.
We now know from Bell that such a theory cannot work. Does this mean that EPR were wrong? Yes, in the sense that the kind of theory they said they thought should be sought turns out to be impossible. But does this mean that their *argument* for the statement "Locality --> Hidden Variables" was flawed? No!!!! It means only that *either* that argument was flawed, or the *other premise* ("Locality") is false.
Nobody has ever pointed out a flaw in the EPR argument (widespread opinion to the contrary notwithstanding). Indeed, the argument has been re-formulated in rigorous terms several times recently. So that leaves no choice but to blame the empirical violation of Bell's inequalities on that first premise, "Locality".
Here it is again in slow motion:
EPR: Locality --> HV's
Bell: Locality + HV's --> X
Experiment: X is false
Conclusion: Locality is false.
Of course, as this thread has certainly made clear, this conclusion is only *interesting* for those who believe that the sense of "locality" needed to make the argument go through, is something that we ought to believe in the first place based on relativity theory. There are some people who deny that (for reasons that don't make any sense to me, but whatever). My point here is just that saying "experiment refuted EPR" represents, as Bell once said about the critics of Einstein, "misunderstanding [that] could hardly be more complete."
|Apr13-06, 02:20 PM||#182|
I mean, you're thinking of one or other phenomenon which is "making the detector click" or not, and if only we knew its value, then we would KNOW in advance whether the detector would click or not. But as such, the stochasticity is still reducible. It can be "irreducible" for us if it is *in principle* unknowable, but it can still be "hidden deterministic", in that one COULD think up a constant field over spacetime with a random value, which is then going to determine the outcomes.
And when this is potentially possible, you have Bell's condition.
But let us now take quantum theory in an extremist Copenhagen view: there are only macroscopic bodies, which follow strictly classical physics, except that in the equations of motion, we have to introduce random events. In order to know the statistical distribution of these random events, we use quantum mechanics, which, however, is not supposed to describe even a microscopic world. Electrons and atoms don't exist. Just macroscopic bodies. But we "pretend" there to be microscopic objects, and wavefunctions and all that, but this is nothing but a big game, which has only one purpose: calculate the statistical distribution of the random actions on the classical dynamics of macroscopic bodies.
Well, that statistical distribution, deemed to be irreducibly stochastic (that means, "it just happens that way" and there's no underlying mechanism which causes it; all the QM formalism is just a trick to calculate it but doesn't represent anything), is what it is.
And how do we check whether it is compatible with the geometry of spacetime? Well, we calculate a set of probabilities of outcomes from the point of view of one observer. And then we do the same for another observer, who suffers a Lorentz boost. And guess what? They come to the same statistical predictions. It is not possible to derive a "preferred reference frame" from these statistical predictions. THIS is the one and only condition that is necessary for this theory to be COMPATIBLE with the spacetime geometry.
Whether this is to be called "local" or not is your business. "Local" really has only a strict meaning in the case of deterministic theories, where THE outcome (not the *probability* of an outcome, because probability is an epistemological concept) at an event E is DETERMINED by what's in the past light cone of E; and even then this locality is only needed for what "we can really influence and know in the lab".
In a deterministic theory, locality is needed to avoid the "I can change the future so as not to produce what I learned about the future" paradox.
In a purely stochastic theory, what remains of this requirement to avoid a paradox is "information locality", for the same reason.
And "Bell locality" is an EXTRA REQUIREMENT one can postulate, for one's own liking, and which comes in fact down to requiring that statistical effects in a theory are derivable from an underlying deterministic, local, theory.
I will agree with you that I have difficulties with such a view too (hence my preference for MWI, where at least there IS something underlying). And of course in the more von Neumann approach, where the wavefunction is a beable, you're perfectly right that it is non-local.
But (though it is not my view) if you simply see QM as a "trick to calculate irreducible stochastic influences on a classical dynamics of macroscopic bodies" and hence deny the existence of a microscopic world, I think you have no clash per se with the Minkowski geometry of spacetime (in which only these macroscopic bodies live of course, not the non-existing microworld). The calculated probabilities do not allow you to find a specific reference frame and they also do not allow you to create a paradox (thanks to information locality).
I simply think that in such a case, "locality" has not much meaning beyond these statements.
It is only when you want to deny the irreducibly stochastic character of the probabilities calculated thanks to the formalism of quantum theory, and when you try to think of a MECHANISM (involving microscopic beables), that you run into problems. And when you consider the "projection of the wavefunction" as something physically happening, you have a bluntly non-local process of course.
|Apr13-06, 04:56 PM||#183|
EPR did NOT make the above argument in their paper; they said: If QM is complete, then there cannot be simultaneous reality to non-commuting observables. And I agree with this conclusion.
What they did not know, but would have been surprised to learn, is that Aspect-like experiments would NOT yield more information on the separated particles than would be allowed by the HUP. They believed, but did not prove, that the HUP could be beaten. So far, the HUP still stands. Ultimately, that is the heart and soul of the debate.
|Apr13-06, 05:41 PM||#184|
I guess, since even Einstein thought that Podolsky's text buried the main point under pointless "erudition", Podolsky, and not you, should be blamed for your confusion over the point and content of the EPR argument.
If you think the point of *either* EPR or Bell's Theorem (or Aspect's experiments) was to actually, in practice, "yield more information ... than would be allowed by the HUP" you have completely and totally missed the whole point of this entire debate.
Sigh. (not a joke)
|Apr13-06, 07:13 PM||#185|
I'm willing to grant the other aspects of Bell Locality, just not the assumption of statistical independence...
In one of the papers you linked before, Bell Locality was formulated in terms of three postulates: parameter independence, statistical independence (I believe it was called "observation independence"), and something else I can't remember.
I'm going to rewind back to the original theory I posted, where I simply posited the existence of a pair of coins that were governed by a joint probability distribution. Upon reflection, I realize that there is a problem with this, but for reasons entirely different than what you've said. So let me make a slight modification before we continue.
Let us start with special relativity, but also add in some additional postulates:
(1) There exist objects called "magic coins", and they come in pairs.
(2) Magic coins can be in one of three states (in addition to whatever SR says): U, H, or T.
(3) Any two pairs of magic coins are otherwise identical.
(4) There is some sort of interaction called "flipping" that can be triggered in a laboratory setting that causes a magic coin in the U state to nondeterministically transition into the H or T state.
(5) Otherwise, magic coins do not change their state.
(6) The flipping interaction for each pair of magic coins is governed by the joint probability distribution P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0. (And the distribution over all pairs of coins factors into those of the individual pairs)
The important things to note are:
(A) The theory is nondeterministic. There is nothing to determine whether the coin undergoes U-->T or U-->H.
(B) The probabilities are understood in the frequentist interpretation: we say the probability of an event E is p when the ratio of the number of times event E occurs over the number of (identical) experiments approaches p as the number of experiments goes to infinity.
In particular, this means probabilities represent nothing more than asymptotic behavior: it is entirely nonsensical to try and use them to describe an individual experiment.
It follows from the axioms of the theory that:
For the experiment where we take a magic coin and flip it, we have P(H)=1/2.
For an experiment where Alice and Bob take a pair of magic coins and flip them, we have
P(Bob sees H | Alice sees H) = 1.
P(Bob sees H | Alice sees T) = 0.
It does not follow from the axioms of the theory that it is impossible for Bob to see H and Alice to see T. (But the probability of it happening is zero)
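A toy simulation of postulates (4)-(6) reproduces these probabilities (a sketch only; note that any finite simulation has to generate the pair *somehow* -- here via a shared draw -- which is of course the very move under dispute):

```python
import random

def flip_pair():
    """One 'flipping' interaction on a pair of magic coins in the U state,
    drawn from the joint distribution P(HH) = P(TT) = 1/2 of postulate (6)."""
    outcome = random.choice("HT")
    return outcome, outcome          # (Alice's coin, Bob's coin)

trials = [flip_pair() for _ in range(100_000)]
p_h = sum(a == "H" for a, _ in trials) / len(trials)
p_bob_h_given_alice_h = (sum(a == b == "H" for a, b in trials) /
                         sum(a == "H" for a, _ in trials))
print(p_h)                    # ~0.5  (P(H) for a single flip)
print(p_bob_h_given_alice_h)  # 1.0   (P(Bob sees H | Alice sees H))
```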
This theory claims to be complete and local in the respect that everything that can be determined can be entirely determined with the local beables.
The important thing this theory is trying to convey relates to the usage of probabilities. Normally (as far as I can tell), the usage of statistics in physics is either entirely aphysical, or it is based upon a very shaky logical foundation.
For example, the frequentist definition of probability requires examining a hypothetical infinite sequence of similar experiments. But we have problems such as:
(1) The limiting ratio may not be well defined.
(2) The sequence of events is hypothetical, and cannot be physical.
(3) We don't know how to classify experiments as similar! (Well, we do know one way, but then we'd never see probabilities other than 0% or 100%)
But, at least in the above theory, we can put the frequentist definition on a rigorous footing -- since there are no factors that affect the outcome of a coin flip, it's clear that we can consider any two coin flipping experiments as similar. (And experiments involving multiple coins are similarly easy) We don't have to worry if we can define a hypothetical sequence of experiments and if the limiting ratio will be defined, because one of the axioms of the theory is that we can do so without any worries.
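Point (B)'s limiting ratio can be illustrated directly (a sketch, assuming the fair U --> H/T transition above; the PRNG is just a stand-in for the theory's randomness):

```python
import random

heads, checkpoints = 0, {10, 1_000, 100_000, 1_000_000}
for n in range(1, 1_000_001):
    heads += random.random() < 0.5   # one U -> H/T flip; True counts as 1
    if n in checkpoints:
        print(n, heads / n)          # relative frequency drifts toward 0.5
```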
In this interpretation, when we say something is a random variable, we do not mean that it is something that can acquire an outcome! We mean that it is something that assigns nonnegative real numbers to the possible outcomes, numbers that add up to one.
Other interpretations suffer from two very difficult philosophical problems:
(1) They are nondeterministic. They assert that there is no reason the measurement turns out the way it did... it just did. But at least we have this probability distribution that describes the results!
(2) It is mysterious why and how the frequentist definition of probability manages to describe anything!
But my interpretation solves both problems.
(1) It is a deterministic theory of random variables.
(2) Probabilities are fundamental elements of reality -- so we don't have to use the frequentist definition to talk about probabilities.
It does raise the philosophical issue about why it looks like we see outcomes, but that question does have an answer.
But, if you insist that it's too radical of an approach, we can stick with nondeterminism, and assert that quantum mechanics without the collapse postulate is capable of describing everything that can be described -- it has a unitarily evolving "state of the universe", and the only other things that can possibly be described are the probabilities involving the outcomes of measurements. But, as you remember, the classical usage of probabilities is as statements about asymptotic behavior, and not statements about individual events.
|Apr14-06, 11:34 AM||#186|
As I see it, there are simply two possibilities: the wave function is, or is not, a beable. If it is, then it's quite clear that the theory is nonlocal (but in agreement with experiment). If it isn't a beable, then we are obliged to calculate probabilities for empirical outcomes based on whatever *are* the beables (which I guess would have to be something else macroscopic) in which case I think it is obvious that we cannot make the correct predictions anymore (since basically there is nothing left of the theory)... in which case the question of whether the thing is local or not is completely moot.
But... and here is the fundamental thing we don't seem to be able to get on the same page about... isn't the *obvious* way to generalize this to say: for stochastic theories, it makes no sense to talk about THE outcome that is DETERMINED. The whole *meaning* of a stochastic theory is that many outcomes are in principle possible, and all the theory can do is state the PROBABILITIES for the VARIOUS POSSIBLE outcomes (based on the state of the beables in the past light cone).

But then the definition of locality that works fine for deterministic theories simply goes over in the obvious way: the PROBABILITIES for the VARIOUS POSSIBLE outcomes should be "fixed" (not the OUTCOMES, but the PROBABILITIES for the different possible outcomes) by a complete specification of beables in the past light cone. That's it.

It's perfectly OK for something irreducibly random to happen that brings about the particular event E -- but still, even in an irreducibly stochastic, non-deterministic theory, we can ask: does the theory predict that the probabilities for these different possible happenings depend on stuff going on at spacelike separation? If so, the theory (though still genuinely, irreducibly stochastic) is nevertheless nonlocal.
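In symbols (just a restatement of the condition above, not an addition to it): writing $\lambda$ for a complete specification of beables in the past light cone of a possible event $E$, the stochastic theory is local just in case

$$P(E \mid \lambda, \sigma) = P(E \mid \lambda)$$

for any further specification $\sigma$ of beables at spacelike separation from $E$.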
It's like this. Take one of Hurkyl's U/H/T magic coin flipping boxes (but forget about pairs of them; there's just the one). And suppose you have some theory according to which:
* If the box is in the U state and the button is pushed and the price of tea in china is above one dollar, then H or T will appear with probability 50/50
* If the box is in the U state and the button is pushed and the price of tea in china is less than one dollar, then H or T will appear with probability 90/10
(and suppose that the event to which "the price of tea in china" refers is spacelike separated from the pushing of the button)
Now, I don't think anybody can deny that this theory is irreducibly stochastic. I am *specifically denying* that there is any hidden variable which "really determines" whether H or T appears. It's just random. Irreducibly random. Yet this *theory* says that the *probabilities* governing the randomness (which is just the kind of dynamics this theory has, since it isn't deterministic) *depend on spacelike-separated goings-on*. I say that makes it a non-local stochastic theory.
Do you disagree with that???
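Here is that example as a toy function (a sketch only; the PRNG below is a stand-in for the theory's *irreducible* randomness, and the function name is mine):

```python
import random

def box_outcome(tea_price_in_china: float) -> str:
    """Push the button on a U-state box: the outcome is irreducibly random,
    but its *probabilities* depend on a spacelike-separated fact."""
    p_heads = 0.5 if tea_price_in_china > 1.00 else 0.9
    return "H" if random.random() < p_heads else "T"
```

Nothing here "really determines" H vs T, yet the distribution itself depends on the distant price -- which is exactly the sense in which the theory is stochastic and nonlocal at once.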
Here's how I want to slice it up. First issue: are you positing a theory, or not? A theory is simply some postulated mechanism for something. For all the kinds of examples we care about in this thread, I assume such a thing would involve microscopic beables, but whatever. The point is, if your "thing" doesn't posit any beables (micro or otherwise) and/or any mechanism for something, then your "thing" isn't a theory. Indeed, it's not even *about* anything.
Now once you've got a theory, you can ask: are its dynamical equations deterministic, or not? This is a perfectly distinct question -- though of course you will get yourself very confused if you try to ask this question when you haven't yet got a theory.
Finally: is the theory *local*? This is also a perfectly distinct question from the above two, though, again, you'll get yourself awfully confused if you take something that isn't a theory (say, your preference for vanilla over chocolate) and start asking questions like "is it local"? Such questions of course have no answer.
Hmmmm. Having written all this, I guess I hope you'll just ignore all of it except the bit about the H/T box and the price of tea in china. I think, if we really disagree about something (and aren't just talking past one another) it will emerge clearly from consideration of that example.
|Apr14-06, 04:57 PM||#187|
Please also see the example (from my other earlier post today) about the H/T box in which the probabilities depend on the price of tea in china. That's really what's at issue here.
The *real* question at issue here is: what does it mean for such a theory to be *local*?
Worrying about relative frequencies for lots of repeated trials is never going to address this.
You seem to just want to make an end run around all of this by (as I've thought all along) never positing a theory, and just talking about statistics. I don't object to your doing that. But I do object to your claiming to be doing something other than that, while simply doing that.
You have completely lost me.
Ugh. I thought we were just disagreeing about some one little thing, but now it seems (not surprisingly, in retrospect) that there are huge major gulfs of confusion (about the meaning of "determinism", etc....) between us. Maybe it's not even worth pursuing. Tell me what you think of the H/T-price-of-tea-in-china example; I'd like to hear your thoughts on whether my example theory is or isn't local. But I don't have the energy (or time) to start over from scratch and figure out what "determinism" means, etc...