What do violations of Bell's inequalities tell us about nature?

What do observed violations of Bell's inequality tell us about nature?

  • Nature is non-local

    Votes: 10 31.3%
  • Anti-realism (quantum measurement results do not pre-exist)

    Votes: 15 46.9%
  • Other: Superdeterminism, backward causation, many worlds, etc.

    Votes: 7 21.9%

  • Total voters
    32
  • #201
ttn said:
But none of that is relevant to the main point here. We don't need QM or any other fancy theory to tell us that pointers point in particular directions, that there's a table in front of me, a cat on the bed, etc. Physical facts like that are just available to direct sense perception. We know them more directly, with more certainty, than we can possibly ever know anything about obscure microscopic things.

But we are discussing the "locality" of the theory QM. In order to do that, we need to identify its beables. As you point out in your paper, what the beables are depends on the particular theory in question. In standard QM, the individual outcomes can't be beables, because they don't exist. Just like you cannot assign beable status to nuclear properties in Newtonian gravity, you cannot assign beable status to individual outcomes in QM, because these theories don't account for these facts. QM doesn't even try to describe individual outcomes.

This is different in Bohmian mechanics, where the description is supplemented by position variables. If there is a position variable, then you can of course assign beable status to it if you want. But there is no such thing in standard QM. The beables of standard QM are the probability distributions and the mean values and so on. Maybe it helps you to put it this way: The prediction of QM for an individual outcome is the mean value. Of course, it's often wrong, but that just means that QM isn't good at predicting individual outcomes. Like Newtonian gravity isn't good at predicting the apsidal precession of Mercury.

Now here is the simple plain fact. To whatever extent you are right that QM cannot account for these sorts of facts (and personally I think you are not right at all, i.e., I think Copenhagen QM *does* account for them, and it was one of Bohr's few valid insights to recognize that it is *crucial* that it be able to account for them) it ceases to be an empirically adequate theory.

Well, as I said, I'm not particularly talking about Bohr's exact point of view. I think nobody really shares Bohr's viewpoint exactly. In principle, everything should be put on the quantum side, so there is no classical side. The classical picture is only a useful tool.

If you don't think that quantum mechanics is an empirically adequate theory, just because it is unable to make predictions about every element of the world you observe, then you must also classify Newtonian gravity as empirically inadequate, because it doesn't predict radioactivity. Maybe Bohmian mechanics is "empirically adequate" for you, but that just means that "empirically adequate" isn't a good criterion to single out useful theories, because even though individual outcomes might exist in BM, it's still not able to make more accurate predictions about these than standard QM is.

I'm perfectly fine with theories that don't describe every aspect of the world. In fact, if you demand this, then you'd have to wait until someone finds the theory of everything (if it exists at all). Up to now, every theory we have has some weakness, where it doesn't describe nature accurately. I don't think that's a problem. The theories are still useful and we can classify them into categories like "local" and "realistic" and "empirically adequate" and so on if we like to.

If I understand you correctly, you are saying that QM cannot account for the fact that something like a pointer (or a table or a cat or a planet) exists with definite properties.

Yes, I'm saying this. QM can only predict its mean value, its standard deviation (which might be very small for macroscopic objects and this is how the classical limit emerges) and other statistical properties.

I don't see an ontological problem with this. The world might just not be the way we naively imagine it to be. In fact, I'm completely agnostic with respect to whether there is more to reality than what my senses tell me. Maybe the world has classical properties, maybe it doesn't. I haven't found a way to decide this question one way or the other, and standard QM doesn't help me to do so.
 
  • #202
rubi said:
But we are discussing the "locality" of the theory QM. In order to do that, we need to identify its beables. As you point out in your paper, what the beables are depends on the particular theory in question. In standard QM, the individual outcomes can't be beables, because they don't exist.

That last is what I (and Bohr) disagree(s) with. The individual outcomes absolutely do exist according to Copenhagen QM. They weren't *predictable* (with certainty) prior to the measurement, but once the measurement happens, one of the results *really occurs*. Yes, which one occurs is *random*; the theory does not predict this. But it does not deny that individual measurements have actual individual outcomes! That would be insane. Or more precisely, as I said before, that would mean that the theory is way wronger than anybody thought.

Concretely: Bob goes into his lab where there is a Stern-Gerlach apparatus. At noon (EST) he hits a button that makes a particle come out of a particle source, go through the magnets, and get detected by one or the other of two photodetectors. Each photodetector is wired up so that, instead of an audible "click", a little white flag with the words "I found the particle!" printed on it pops up into the air. Now on this particular occasion, at noon, it turns out that the flag on the lower detector pops up. That is -- if anything ever was -- a physical fact out there in nature. And if you are really saying that ordinary QM denies that any such thing happens, then ordinary QM is just simply *wrong*. It fails to describe the facts correctly.

Now for the record, as I've said, I think here it is you who is trivially wrong, not Copenhagen QM. I loathe Copenhagen QM. I think it's a terrible, indeed embarrassing, theory. But it's terrible/embarrassing because it doesn't really give any coherent *physical* account of the microscopic parts of the world; because it involves artificially dividing the world into these two realms, macro and micro; because the idea of distinct laws for these separate realms, and then special exceptions to those laws for the at-best-vaguely-defined situations called "measurements", is ridiculous for any theory with pretensions to fundamentality; etc. But despite all these (really serious) problems, I do concede that Copenhagen QM is at least an empirically adequate theory, in the sense that it says true things about what the directly observable aspects of the world are like and in particular makes the right statistical predictions for how things like the goofy little flags should work in the appropriate circumstances. It's like Ptolemy's theory of the solar system -- it makes the right predictions, but it just can't be the correct fundamental theory.


Just like you cannot assign beable status to nuclear properties in Newtonian gravity, you cannot assign beable status to individual outcomes in QM, because these theories don't account for these facts. QM doesn't even try to describe individual outcomes.

I think you are just taking "QM" to refer *exclusively* to the parts of the theory that pertain only to the so-called microscopic world. That is, you are not treating the usual textbook measurement axioms (and the associated ontological commitments!) as part of the theory. But (unless you are an Everettian, but let us here talk just about "ordinary QM") those parts of the theory really are absolutely crucial. Without them, the theory doesn't say anything at all about experimental outcomes (even the statistics thereof). That is, if you leave those parts out, you are truly left with a piece of math that is totally divorced from the physical world of ordinary experience, i.e., totally divorced from empirical data/evidence/science. Indeed, I think it would be accurate to say that this math is literally meaningless since there is nothing coherent left for it to refer to. Bohr, at least, understood quite well that, at the end of the day, the theory better say something about pointers, tables, cats, planets, flags, etc. I think Bohr was dead wrong insofar as he seems to have thought that this is *all* you could say anything about. To use one of Bell's apt words, Bohr thought the microscopic world was in some sense "unspeakable". That is dead wrong. It was a result of various empiricist/positivist strands of philosophy that were popular at the time, but that practically nobody outside of physics departments takes seriously anymore.


This is different in Bohmian mechanics, where the description is supplemented by position variables. If there is a position variable, then you can of course assign beable status to it if you want. But there is no such thing in standard QM.

Not in the micro-realm, that's true. But Copenhagen QM's full description of the world -- its full ontology -- is *not* simply the wave function for the micro-realm. It is the wave function for the micro-realm *and classical objects/properties for the macro-realm*.


The beables of standard QM are the probability distributions and the mean values and so on. Maybe it helps you to put it this way: The prediction of QM for an individual outcome is the mean value.

No, that is wrong, unless you are just speaking extremely loosely/imprecisely. The prediction of QM for an individual outcome is: the outcome will be one of the eigenvalues of the appropriate operator, with the probabilities of each possibility being given by the expectation value of the projector onto that eigenstate. Yes, you can of course calculate a probability-weighted average of these possible outcome values, the expectation/mean value. But QM absolutely does *not* predict that that mean value will be the outcome. If it did predict that, again, it would be simply, empirically, false. For example, here comes a particle (prepared in the "spin up along x" state) to a SG device that will measure its spin along the z direction. The expectation value is zero. But the actual outcome is never zero, it is always either +hbar/2 or -hbar/2. I know you understand all this, but what you said above is really, badly wrong, at least as written.


Of course, it's often wrong, but that just means that QM isn't good at predicting individual outcomes. Like Newtonian gravity isn't good at predicting the apsidal precession of Mercury.

No, that is not at all the right way to think about it. It's not that QM is always (or almost always) wrong. It's rather that it only makes probabilistic predictions. It says (in the example just above) that there's a 50% chance that the outcome will be +hbar/2 and a 50% chance that the outcome will be -hbar/2. When you find out that, in fact, for a given particle, the outcome was -hbar/2, you do not say "QM was wrong". You say "Cool, that's perfectly consistent with what QM said." If you want to know whether QM's predictions are right, then yes, you need to run the experiment a million times and look at the statistics to make sure it really is +hbar/2 about half the time, etc. But it is not at all that the prediction for the individual event was *wrong*. The prediction for the individual event was probabilistic, which is absolutely consistent with what in fact ends up happening in the individual event.
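The spin example above can be sketched numerically. The following is an illustrative simulation (my own, not from the thread), working in units where hbar = 1: for a particle prepared spin-up along x, a z-measurement yields +hbar/2 or -hbar/2 with probability 1/2 each, so the sample mean is near zero even though zero itself is never an outcome.

```python
import random

HBAR = 1.0  # illustrative choice of units, not from the thread

# |+x> in the z-basis is (|+z> + |-z>)/sqrt(2)
amp_up = 2 ** -0.5
amp_down = 2 ** -0.5

p_up = abs(amp_up) ** 2      # Born rule: probability of outcome +hbar/2
p_down = abs(amp_down) ** 2  # probability of outcome -hbar/2

random.seed(0)
outcomes = [HBAR / 2 if random.random() < p_up else -HBAR / 2
            for _ in range(100_000)]

mean = sum(outcomes) / len(outcomes)

# Every single outcome is +-hbar/2; the expectation value 0 never occurs,
# yet the sample mean converges to it.
print(all(abs(o) == HBAR / 2 for o in outcomes))
print(abs(mean) < 0.01)
```

This makes the distinction concrete: the probabilistic prediction (50/50 over the two eigenvalues) is confirmed by the statistics, while the mean value is not a prediction of any individual outcome.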



Well, as i said, I'm not particularly talking about Bohr's exact point of view. I think nobody really shares Bohr's viewpoint exactly. In principle, everything should be put on the quantum side, so there is no classical side. The classical picture is only a useful tool.

But if you do that (and again here leaving aside the possible Everettian "out") you get nonsense. That is, you get something that is just as wrong -- just as inconsistent with what we see with our naked eyes actually happening in the lab -- as the denial that there is any physically real definite macro-state.



I'm perfectly fine with theories that don't describe every aspect of the world.

Me too.



Yes, I'm saying this. QM can only predict its mean value, its standard deviation (which might be very small for macroscopic objects and this is how the classical limit emerges) and other statistical properties.

This is simply not true. QM can *also* predict the *possible* definite outcome values. In general, there are several of these, i.e., many different possible outcomes with nonzero probabilities. Despite the flaws in the theory, it is right about these.


I don't see an ontological problem with this. The world might just not be the way we naively imagine it to be.

Are you really equating *direct sense perception* -- surely the foundation of all properly empirical science -- with "naive imagination"?



In fact, I'm completely agnostic with respect to whether there is more to reality than what my senses tell me.

Well, I think it's pretty naive to think that our senses tell us everything that is true of the world. (For example, that would mean the world disappears every time you blink.) But this isn't even what's at issue here. The question is just whether what your senses tell you is at least part of what's real. When that one flag pops up, and you see this, it really popped up -- and any theory that says otherwise is ipso facto rendered false.
 
  • #203
stevendaryl said:
In English, we explain this case as follows: (Let me change it slightly from previously)

There are two boxes, one is labeled "Alice", to be sent to Alice, and the other labeled "Bob" to be sent to Bob. We flip a coin, and if it is heads, we put the green ball in Alice's box, and the red ball in Bob's box. If it is tails, we put the red ball in Alice's box, and the green ball in Bob's box.

In this case, the hidden variable \lambda has two possible values, H, for "heads" and T for "tails". Then our probabilities are
(letting A mean "Alice gets green" and B mean "Bob gets red".)

  • P(H) = P(T) = \frac{1}{2}
  • P(H T) = 0
  • P(A \vert H) = P(B \vert H) = 1
  • P(A \vert T) = P(B \vert T) = 0

We can compute other probabilities as follows:
  • P(AB) = P(AB \vert H) \cdot P(H) + P(AB \vert T) \cdot P(T) = \frac{1}{2}
  • P(A) = P(A \vert H) \cdot P(H) + P(A \vert T) \cdot P(T) = \frac{1}{2}
  • P(B) = P(B \vert H) \cdot P(H) + P(B \vert T) \cdot P(T) = \frac{1}{2}
  • P(A \vert B) = P(AB)/P(B) = 1

Bell's criterion for the case of A and B being causally separated is not

P(A \vert B) = P(A)

(which is false). Instead, it's

P(A \vert B \lambda) = P(A \vert \lambda)
where \lambda is a complete specification of the relevant information in the common past of A and B, which is true.
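The probabilities quoted above can be checked with a short simulation (an illustrative sketch, not from the thread): flip the coin \lambda, assign the balls accordingly, and estimate P(A), P(B), P(AB), and P(A|B) from the frequencies.

```python
import random

random.seed(42)
N = 200_000

n_A = n_B = n_AB = 0
for _ in range(N):
    lam = random.choice("HT")  # the hidden variable: the coin flip
    a = (lam == "H")           # A: "Alice gets green" (green goes to Alice on heads)
    b = (lam == "H")           # B: "Bob gets red" (red goes to Bob on heads)
    n_A += a
    n_B += b
    n_AB += a and b

p_A = n_A / N
p_B = n_B / N
p_AB = n_AB / N
p_A_given_B = p_AB / p_B

# As in the post: P(A) = P(B) = P(AB) = 1/2 and P(A|B) = 1,
# so P(A|B) != P(A); but conditioning on lambda restores independence,
# since lambda already fixes both outcomes.
print(p_A_given_B)  # -> 1.0
```

This shows numerically why the naive criterion P(A|B) = P(A) fails here while the lambda-conditioned criterion holds trivially.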

stevendaryl (and ttn) thank you for the replies. I will need some time to digest them.

rlduncan
 
  • #204
rlduncan said:
stevendaryl (and ttn) thank you for the replies. I will need some time to digest them.

Cool. =)

Here is something else closely related to this for you and others to consider. Assuming we adopt Bell's definition of locality, and restricting our attention to the case where Alice and Bob measure along parallel axes (which is completely equivalent to the red/green balls), we have that

P(A,B|λ) = P_alice(A|λ) P_bob(B|λ).

Here λ is a complete specification of the state of the particles/balls according to some candidate theory (QM, Bohm, whatever). A=±1 and B=±1 are the outcomes on each side (+1 means spin up, for spinny particles, or +1 means "red" for the balls... note this is different from how stevendaryl used the same symbols.)

Now consider one of the particular joint outcomes that never happens, say A=+1 and B=+1. Let's allow that, for the same preparation procedure (producing what QM calls the "singlet state", or some random coin flippy thing that decides which ball goes where, etc.), there are perhaps many different λs that are sometimes produced. Still, if we run the experiment a bajillion times, we *never* see the joint outcome A=+1, B=+1. So it must be that the probability P(+1,+1|λ) = 0 for *all* possible λs that this preparation procedure sometimes produces.

Plugging into the factorization condition above (that remember follows from Bell's definition of locality) we then have that, for all λ,

0 = P(+1,+1 | λ) = P_alice(+1|λ) P_bob(+1|λ).

OK, so these two probabilities multiply to zero. So at least one of them has to equal zero.

You can now easily see that the general class of λs has to break into two sub-classes:

{λa}: those λs for which P_alice(+1|λ)=0, i.e., those λs for which Alice's measurement is guaranteed *not* to yield A=+1, i.e., those λs for which Alice's measurement is guaranteed to instead yield A=-1. Now if, for any λ in {λa}, P_bob(+1|λ) were anything other than 100%, we would occasionally see the joint outcome A=-1, B=-1. Since in fact we never see this, it must be that, for all λ in {λa}, P_alice(-1|λ)=100% and P_bob(+1|λ) = 100%. That is, λ being in {λa} means that both particles carry pre-measurement non-contextual "hidden variables" that pre-determine the outcomes A=-1 and B=+1.

{λb}: those λs for which P_bob(+1|λ)=0, i.e., those λs for which Bob's measurement is guaranteed *not* to yield B=+1, i.e., those λs for which Bob's measurement is guaranteed to instead yield B=-1. Now if, for any λ in {λb}, P_alice(+1|λ) were anything other than 100%, we would occasionally see the joint outcome A=-1, B=-1. Since in fact we never see this, it must be that, for all λ in {λb}, P_alice(+1|λ)=100% and P_bob(-1|λ) = 100%. That is, λ being in {λb} means that both particles carry pre-measurement non-contextual "hidden variables" that pre-determine the outcomes A=+1 and B=-1.

Please appreciate that this is merely a formalization of the EPR argument *from locality to* these deterministic hidden variables. In terms of the red and green balls, it shows that the *only way* to locally explain why Alice's and Bob's balls are always *different colors* is to say that there was some definite, though perhaps unknown, fact of the matter about the colors (perhaps varying randomly from one trial to the next) even prior to the observations. This of course is just the ordinary/obvious/everyday way of explaining what is going on with the balls. If somebody wanted to be weird, they could deny that the balls have definite colors until looked at later, but this would require nonlocality -- in particular, one person looking at his ball would fix not only the color of that ball but would *also* have to fix the color of the distant ball. That is what the simple little theorem above proves. "Realism" (meaning here pre-determined values, "hidden variables") is *required* if you want to try to explain the perfect correlations *locally*.
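The little theorem above can even be checked by brute force. The following sketch (mine, purely illustrative) scans a grid of candidate single-side probabilities pa = P_alice(+1|λ), pb = P_bob(+1|λ) and keeps only those consistent with the factorization condition plus perfect anticorrelation, i.e. P(+1,+1|λ) = P(-1,-1|λ) = 0:

```python
# Locality (factorization): P(A,B|lam) = pa(A|lam) * pb(B|lam).
# Perfect anticorrelation: P(+1,+1|lam) = 0 and P(-1,-1|lam) = 0.
# Scan candidate (pa, pb) pairs and collect the survivors.

survivors = []
steps = 100
for i in range(steps + 1):
    for j in range(steps + 1):
        pa = i / steps  # candidate P_alice(+1|lam)
        pb = j / steps  # candidate P_bob(+1|lam)
        p_plus_plus = pa * pb
        p_minus_minus = (1 - pa) * (1 - pb)
        if p_plus_plus == 0 and p_minus_minus == 0:
            survivors.append((pa, pb))

# Only the two deterministic assignments survive -- exactly the
# {lambda_a}/{lambda_b} split described in the post.
print(sorted(survivors))  # -> [(0.0, 1.0), (1.0, 0.0)]
```

That the only survivors are (0, 1) and (1, 0) is the formal content of the EPR step: locality plus perfect anticorrelation forces pre-determined outcomes.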
 
  • #205
Thanks ttn. I agree, and it was well presented, which helps a lot.
 
  • #206
ttn said:
That last is what I (and Bohr) disagree(s) with. The individual outcomes absolutely do exist according to Copenhagen QM. They weren't *predictable* (with certainty) prior to the measurement, but once the measurement happens, one of the results *really occurs*. Yes, which one occurs is *random*; the theory does not predict this. But it does not deny that individual measurements have actual individual outcomes! That would be insane. Or more precisely, as I said before, that would mean that the theory is way wronger than anybody thought.

Every theory is wrong in the strict sense. Wrong or right are not useful properties to classify models. A model is always different from reality, because it is only a model.

Do you really believe that the world is split into a classical and a quantum world? I don't think there is any physicist today who really believes this. There is only one world, without a split. Decoherence has shown us how the classical world emerges from quantum mechanics. If you believe in the quantum-classical split, then you neglect 40 years of research. The world is either quantum or classical (or something completely different). And I believe it's quantum rather than classical. The quantum-classical split is obsolete. (I'm not saying that there aren't any open problems.)

Yes, quantum mechanics does restrict the set of possible measurement values. But that doesn't mean that it predicts that there should be reality ascribed to these outcomes. To put it in as few words as possible: QM does not assert that after a measurement a particle acquires the real property of having a position. Instead it asserts that after a measurement, the probability distribution associated with the position observable is very sharply peaked about a certain value. The particle never has the property of having a position. Not before the measurement, not after it, and not even in the instant when it is measured. It has only an associated probability distribution that might in certain situations be sharply peaked. The classical picture we perceive is an emergent phenomenon that QM predicts if you include the measurement apparatus and the environment in the quantum description and then coarse-grain it by computing partial traces and so on. Are the measured values "real" or is this reality an emergent phenomenon? I think the latter is the better way to think about it.

If this is peculiar to you and you want to discard QM because of this, then this is your personal choice. If you think QM is too weird to be true and one should not further pursue it and rather look for different theories, then that is also your personal choice. But there's no point in arguing for one way or the other. At the moment, it's a matter of belief, much like it is a matter of belief whether there is a god or not. The question of whether there are real counterparts to the peaks of the QM probability distributions cannot be answered. What does "real" mean anyway, if I can only gain knowledge about reality through my senses? Is there any reality beyond what my senses tell me? I think these kinds of questions are irrelevant to physics. I have chosen to stay agnostic with respect to them and I don't feel uncomfortable about it.

Concretely: Bob goes into his lab where there is a Stern-Gerlach apparatus. At noon (EST) he hits a button that makes a particle come out of a particle source, go through the magnets, and get detected by one or the other of two photodetectors. Each photodetector is wired up so that, instead of an audible "click", a little white flag with the words "I found the particle!" printed on it pops up into the air. Now on this particular occasion, at noon, it turns out that the flag on the lower detector pops up. That is -- if anything ever was -- a physical fact out there in nature. And if you are really saying that ordinary QM denies that any such thing happens, then ordinary QM is just simply *wrong*. It fails to describe the facts correctly.

I'm not saying that QM denies that such a thing happens. I'm saying that if you describe that system completely quantum mechanically (including the apparatus), then QM will predict that the probability distribution of a little white flag appearing will be sharply peaked. Of course, it's completely impractical to include the apparatus in the QM description for such simple experiments. It's only of academic interest. But in principle it's the right way to look at it, and it can be done. People study such models. If you aren't familiar with this, you might want to get the book by Maximilian Schlosshauer for starters.

Now for the record, as I've said, I think here it is you who is trivially wrong, not Copenhagen QM. I loathe Copenhagen QM. I think it's a terrible, indeed embarrassing, theory. But it's terrible/embarrassing because it doesn't really give any coherent *physical* account of the microscopic parts of the world; because it involves artificially dividing the world into these two realms, macro and micro; because the idea of distinct laws for these separate realms, and then special exceptions to those laws for the at-best-vaguely-defined situations called "measurements", is ridiculous for any theory with pretensions to fundamentality; etc. But despite all these (really serious) problems, I do concede that Copenhagen QM is at least an empirically adequate theory, in the sense that it says true things about what the directly observable aspects of the world are like and in particular makes the right statistical predictions for how things like the goofy little flags should work in the appropriate circumstances. It's like Ptolemy's theory of the solar system -- it makes the right predictions, but it just can't be the correct fundamental theory.

As I said, the quantum-classical split is obsolete. It's obviously wrong. There is only the quantum part of the theory left. Everything else can in principle be described using decoherence. (At least we believe this to be the case. It's still an area of active research.) If that makes modern quantum researchers non-Copenhagenists, then that's okay. Let's call them quantum instrumentalists. I think that's a fair description.

I think you are just taking "QM" to refer *exclusively* to the parts of the theory that pertain only to the so-called microscopic world. That is, you are not treating the usual textbook measurement axioms (and the associated ontological commitments!) as part of the theory. But (unless you are an Everettian, but let us here talk just about "ordinary QM") those parts of the theory really are absolutely crucial. Without them, the theory doesn't say anything at all about experimental outcomes (even the statistics thereof).

Now we are progressing. QM doesn't say anything about the outcomes. Yes, that's true. But the quantum part of the theory does say everything about the statistics. Just compute |\psi(x)|^2, \int x|\psi(x)|^2\mathrm d x and so on. You don't need any classical supplement of the theory in order to compute these things.
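For concreteness, those two integrals can be evaluated numerically for a sample wavefunction (the Gaussian wavepacket and its parameters are my own illustrative choice, not from the thread):

```python
import math

# A normalized Gaussian wavepacket centered at x0 with position spread sigma
x0, sigma = 3.0, 0.5

def psi(x):
    return (2 * math.pi * sigma**2) ** -0.25 * math.exp(-(x - x0)**2 / (4 * sigma**2))

# Crude Riemann sums for the integrals mentioned above
dx = 0.001
xs = [-10 + k * dx for k in range(int(20 / dx))]
norm = sum(abs(psi(x))**2 * dx for x in xs)        # integral of |psi|^2 dx   -> 1
mean_x = sum(x * abs(psi(x))**2 * dx for x in xs)  # integral of x |psi|^2 dx -> x0

print(round(norm, 3), round(mean_x, 3))  # -> 1.0 3.0
```

The point being illustrated: everything statistical (normalization, mean, and likewise higher moments) comes out of the quantum formalism alone, with no classical supplement.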

That is, if you leave those parts out, you are truly left with a piece of math that is totally divorced from the physical world of ordinary experience, i.e., totally divorced from empirical data/evidence/science.

Math is always divorced from the physical world. What's wrong is the claim that it is also divorced from empirical data. The math of quantum mechanics accurately predicts the statistical properties of the empirical data. Just compute the things I wrote above. I hope you don't deny this. That would be almost delusional.

Indeed, I think it would be accurate to say that this math is literally meaningless since there is nothing coherent left for it to refer to. Bohr, at least, understood quite well that, at the end of the day, the theory better say something about pointers, tables, cats, planets, flags, etc. I think Bohr was dead wrong insofar as he seems to have thought that this is *all* you could say anything about. To use one of Bell's apt words, Bohr thought the microscopic world was in some sense "unspeakable". That is dead wrong. It was a result of various empiricist/positivist strands of philosophy that were popular at the time, but that practically nobody outside of physics departments takes seriously anymore.

Please let's get rid of the ridiculous quantum-classical split. It's so obvious that it's wrong, especially after the success of decoherence. I'm not a Copenhagenist in the sense you describe it. I think no serious researcher is nowadays.

Not in the micro-realm, that's true. But Copenhagen QM's full description of the world -- its full ontology -- is *not* simply the wave function for the micro-realm. It is the wave function for the micro-realm *and classical objects/properties for the macro-realm*.

You are fighting a straw man here. I don't want to describe the macroworld classically. If I want to talk about outcomes quantum mechanically, I have to include the measurement apparatus in the quantum description. Otherwise, I'm not using pure quantum mechanics. I'd be using a strange mixture of classical and quantum mechanics. I don't care about this mixture theory. If you want to use QM in order to talk about some aspects of nature, then you have to include these aspects of nature in your QM model. If these aspects of nature are pointers and cats, then you have to include pointers and cats in your QM model. There is no reason why the measurement apparatus shouldn't itself behave quantum mechanically. After all, it's made of the same atoms that your quantum system is made of.

No, that is wrong, unless you are just speaking extremely loosely/imprecisely. The prediction of QM for an individual outcome is: the outcome will be one of the eigenvalues of the appropriate operator, with the probabilities of each possibility being given by the expectation value of the projector onto that eigenstate.

There is no prediction of QM for individual outcomes. Yes, QM restricts the space of measurement values, but that's not a prediction about what the outcome of an experiment will be. Here's a classical analogue: Classical mechanics says that the values of the position variable can be between -\infty and \infty. But the prediction of the position is a function x(t) = x_0 \sin(\omega t). An analogue of this function x(t) is missing in QM. Yes, the range of measurable values is restricted, but QM is unable to predict a certain value. That's because the existence of an underlying position value is neglected. Of course, QM needs to specify the range of values that the probability distributions extend over. Otherwise, it would be nonsense to talk about probabilities in the first place. The range is just part of a complete specification of the predicted probability distribution.

Yes, you can of course calculate a probability-weighted average of these possible outcome values, the expectation/mean value. But QM absolutely does *not* predict that that mean value will be the outcome. If it did predict that, again, it would be simply, empirically, false.

Yes, you are right. QM doesn't predict that the measured value will be the mean value. That's because QM doesn't predict at all, what the measured value will be. But if you insist on squeezing a prediction about an individual outcome out of QM, then the best thing you can possibly do is take the mean value. I repeat: You should only do this if you insist and it will give you wrong predictions, albeit they might sometimes be close to the measured values if the standard deviation is small enough.

For example, here comes a particle (prepared in the "spin up along x" state) to a SG device that will measure its spin along the z direction. The expectation value is zero. But the actual outcome is never zero, it is always either +hbar/2 or -hbar/2. I know you understand all this, but what you said above is really, badly wrong, at least as written.

Yes, I agree it is badly written. I was just trying to say that if you really, really want a prediction about an individual outcome from pure QM theory, then the best thing QM offers is the mean value, although that value can give you badly wrong predictions sometimes. After all, it's not its purpose to predict individual outcomes. But there is no other way to squeeze information about individual outcomes out of QM. Just knowing their range is not a prediction, unless maybe the range contains only one point. In that case, I would agree that it would be a prediction of an individual outcome. But what operator corresponding to an observable in QM has only one point in its spectrum?

No, that is not at all the right way to think about it. It's not that QM is always (or almost always) wrong. It's rather that it only makes probabilistic predictions.

Exactly. It doesn't even try to make predictions about individual outcomes. That's why it is not wrong about them. It can't be. "Si tacuisses, philosophus mansisses" holds also for physical models, it seems. :)

It says (in the example just above) that there's a 50% chance that the outcome will be +hbar/2 and a 50% chance that the outcome will be -hbar/2. When you find out that, in fact, for a given particle, the outcome was -hbar/2, you do not say "QM was wrong". You say "Cool, that's perfectly consistent with what QM said." If you want to know whether QM's predictions are right, then yes, you need to run the experiment a million times and look at the statistics to make sure it really is +hbar/2 about half the time, etc. But it is not at all that the prediction for the individual event was *wrong*. The prediction for the individual event was probabilistic, which is absolutely consistent with what in fact ends up happening in the individual event.

Yes, it's consistent with QM, because the value lies in the spectrum of the associated observable. But it wasn't a prediction. The value hbar/2 for measurement number 1327, carried out at 12:07 pm, wasn't predicted by QM.

But if you do that (and again here leaving aside the possible Everettian "out") you get nonsense. That is, you get something that is just as wrong -- just as inconsistent with what we see with our naked eyes actually happening in the lab -- as the denial that there is any physically real definite macro-state.

Here's where we don't agree. I think that what you call a "physically real definite macro-state" should in principle also have a quantum mechanical description. We just normally don't include it in the quantum model. And if we did describe it quantum mechanically, we would get probability distributions for the pointers of the apparatuses, instead of definite values. They might be sharply peaked, but that's not the point.

This is simply not true. QM can *also* predict the *possible* definite outcome values. In general, there are several of these, i.e., many different possible outcomes with nonzero probabilities. Despite the flaws in the theory, it is right about these.

But the possible outcome values, that is, the spectrum of the observables, aren't predictions of the outcomes themselves, in the sense that it's not possible to know what value the next outcome will take (unless you have a one-point spectrum). The set of possible outcome values is a prediction of QM, but QM still doesn't predict that the quantum objects ever acquire definite values.

Are you really equating *direct sense perception* -- surely the foundation of all properly empirical science -- with "naive imagination"?

I think we can't trust our senses. Our brain forges a picture of the world for us that doesn't necessarily have anything to do with the "real world", whatever that is. Look at the keyboard in front of you. Does it really have a color, or is the color you see only an emergent phenomenon? The same goes for things like position. Is the position of a particle really a "real thing", or an emergent phenomenon our brain tricks us into believing? I think these questions are meaningless and are more religion or philosophy than actual physics. I'm just saying that it's naive to believe that the "real world" is exactly what we imagine it to be, especially after all the really strange things that physics has discovered. In fact, every piece of information we acquire first runs through billions of neurons before it reaches the visual cortex. Who knows to what extent the information is distorted? After all, we know very little about the inner workings of the brain and our mind. I'm fine with the idea that my perception of the world doesn't necessarily need to have exact counterparts in "reality" (again, whatever that is). What if string theory turns out to be "right" and our world is really 11-dimensional? Could you accept that? For me that would be just as weird as QM and GR together, but I would manage to get accustomed to it.

When that one flag pops up, and you see this, it really popped up -- and any theory that says otherwise is ipso facto rendered false.

A flag popping up is consistent with QM. It's just not a prediction.
 
  • #207
rubi said:
Decoherence has shown us how the classical world emerges from quantum mechanics.

Decoherence is not enough to explain the emergence of classicality; this has already been discussed here.
 
  • #208
"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity. (Einstein, Podolsky, Rosen 1935, p. 777)"

That's all just an element of reality, not reality itself.
 
  • #209
audioloop said:
Decoherence is not enough to explain the emergence of classicality; this has already been discussed here.

I know that it doesn't solve all the problems. But I think splitting the world into "quantum" and "classical" is wrong (though useful for practical purposes). If we want to use quantum mechanics to describe the world, then we have to live with the fact that it doesn't predict individual outcomes and that we can't squeeze a completely classical picture out of it. It's just, perhaps, a limitation of the theory.

I really don't want to end up in a philosophical debate over this. I only wanted to make my point clear that individual outcomes aren't beables of standard QM. If the property "position" doesn't exist, then it can't be a beable. You need to supplement the theory with additional elements (like a quantum-classical split) in order to even be able to talk about something like "definite position" in QM at all. Without such a supplement, standard QM stands on its own and predicts only statistics.
 
  • #210
Rubi... It's clear we are not on the same page here. I've basically already said, as clearly as I know how, what I think is wrong with your position. I will try here to briefly clarify some points of apparent miscommunication, but there is no point continuing to argue about the central point at issue here. Our views have both been made clear.

Do you really believe that the world is split into a classical and a quantum world? I don't think there is any physicist today who really believes this. There is only one world, without a split.

I think we agree here. No, I don't believe the world is split. But Bohr absolutely did! And this notion of a "shifty split" (as Bell called it) is built into the structure of ordinary QM. This is closely related to what is usually called the measurement problem. It sounds like we agree that that's a problem, and that therefore a new theory (which doesn't suffer the problem, and which doesn't divide the world in two) is needed.


Decoherence has shown us how the classical world emerges from quantum mechanics.

No it has not. That's of course a complicated and controversial statement, so take this simply as a report of my view (rather than an attempt to argue for it).


Yes, quantum mechanics does restrict the set of possible measurement values. But that doesn't mean that it predicts that there should be reality ascribed to these outcomes.

My point has been that even ordinary QM ascribes reality to the outcomes *after they occur*. That is all that it does, true -- it denies the existence of any pre-measurement values ("hidden variables") in general. But that is all that is necessary here. Remember the context in which this came up. Bell's definition of locality is in terms of "beables". There was a question about whether the *outcomes* (often called "A" and "B") count as beables according to QM. I say: they do. They are beables that evolve *stochastically* -- you cannot predict in advance what "A" or "B" might be. But once they be, they be.



To put it in as few words as possible: QM does not assert that after a measurement a particle acquired the real property of having a position.


That's debatable, but irrelevant. Take "A" and "B" to refer (in the usual spin-based EPR-Bell scenario) not to "what the spins of the particles really are after the measurement" (I agree that we probably shouldn't interpret ordinary QM as claiming that any such thing exists, even after the measurement) but rather "where the flash occurred behind the SG magnets -- up here, or down there" (or if you prefer, "which of the two goofy flags popped up"). Those latter sorts of directly-perceivable, uncontroversially-real physical facts -- those latter sorts of *beables* -- are what phrases like "the actual outcomes" or symbols like "A" and "B" refer to.



As I said, the quantum-classical split is obsolete. It's obviously wrong. There is only the quantum part of the theory left. Everything else can in principle be described using decoherence. (At least we believe this to be the case; it's still actively researched.) If that makes modern quantum researchers non-Copenhagenists, then that's okay. Let's call them quantum instrumentalists. I think that's a fair description.

What the instrumentalists (or whatever you want to call them) miss is that, if you simply abandon the separate classical/macro realm that Bohr (awkwardly) had to just posit, there are no local beables left in the theory, and hence nothing like tables, chairs, pointers, cats, etc. to be found in the theory. Yes, there is a big wave function, on a big configuration space, some of whose axes correspond in some way to the degrees of freedom that (classically) one would associate with the particles composing the tables, chairs, etc. But there are no actual particles, or any other physically real stuff in 3D space, for the tables and chairs to be made of.

This, incidentally, is why, by getting rid of the whole macro/classical realm and all its associated laws, MWI solves the measurement problem beautifully, but introduces a new (and much more severe!) problem (that Copenhagen QM did *not* suffer from). I guess one might call that new problem the "reality" problem, though it would be nice to find a less wacky sounding name...


Now we are progressing. QM doesn't say anything about the outcomes. Yes, that's true. But the quantum part of the theory does say everything about the statistics. Just compute |\psi(x)|^2, \int x|\psi(x)|^2\,\mathrm{d}x, and so on. You don't need any classical supplement to the theory in order to compute these things.

You need the classical part of the theory (again, in the context of Copenhagen QM here...) in order to give these quantities you compute something to be *about*. It's lovely to be able to calculate something that you *call* "the probability for the top flag to pop up", but if there is no actually existing physically real top flag, which actually really physically pops up or not, then what in the world are you even talking about? I mean that question literally, and the answer, literally, is: nothing.

Here's where we don't agree. I think that what you call a "physically real definite macro-state" should in principle also have a quantum mechanical description. We just normally don't include it in the quantum model. And if we did describe it quantum mechanically, we would get probability distributions for the pointers of the apparatuses, instead of definite values. They might be sharply peaked, but that's not the point.

No, if you describe that stuff QMically (in the sense you mean), you get a big Schroedinger cat state. Yes, yes, you want to consider the reduced density matrix and then *interpret* that as meaning one or the other of the decohered options, with certain probabilities. But surely you can see that a swindle occurs here, in going from the *and* (which is uncontroversially present in the wave function) to the *or* which you get out only after waving your arms and saying magic words.



I think we can't trust our senses.

Then it is impossible to base conclusions (like, for example, the conclusion that classical mechanics failed to correctly predict things like the H spectrum and all the other stuff that convinced us to abandon classical mechanics in favor of QM!) on empirical data, period.
 
  • #211
rubi said:
I know that it doesn't solve all the problems. But I think splitting the world into "quantum" and "classical" is wrong (though useful for practical purposes).


I agree, and it's just another theory.
 
  • #212
ttn said:
in response to "we can't trust our senses"
Then it is impossible to base conclusions (like, for example, the conclusion that classical mechanics failed to correctly predict things like the H spectrum and all the other stuff that convinced us to abandon classical mechanics in favor of QM!) on empirical data, period.

The meaning of "we can't trust our senses" isn't that "our senses give us no information about the world", it's just that we can't assume that there is a close relationship between the way things are and the way things appear to our senses.
 
  • #213
stevendaryl said:
The meaning of "we can't trust our senses" isn't that "our senses give us no information about the world", it's just that we can't assume that there is a close relationship between the way things are and the way things appear to our senses.

Yes, obviously this is a complex issue. Is the white color of the flag intrinsic in the flag, or is it somehow a relational property between the flag and my sensory apparatus, or what? All of these sorts of things are tricky and subtle and probably none of us want to get into them here! My point is just: if you think we can get any useful information at all about the external world from our senses (and I certainly do), then surely this will have to include basic facts like that there is a 3D world full of stuff that moves around and interacts and that includes things like little flag-shaped hunks of material that sometimes pop up and down. My view is that, if you regard that as even-possibly-mistaken, then you are never going to get anything remotely resembling empirical science off the ground; certainly, if such things "might be wrong", then *literally everything we have ever taken as empirical evidence for anything in science ever* "might be wrong", and then, well, we're totally at sea.
 
  • #214
Hi ttn,

I've read your post completely, but I think it's better to answer in a shorter way instead of considering each of your statements separately. After all, we have both made our points more or less clear. If you want me to address any particular statement more carefully than I do now, please let me know.

--

First of all, I agree that decoherence hasn't solved the quantum-classical transition completely and that it's controversial. My main point wasn't to talk about decoherence, but to argue that the individual outcomes aren't more real than the wave function itself if we include them in the quantum picture. That's why they aren't part of reality but an emergent phenomenon, although it isn't yet clear how the mechanism of this emergence works, and it's also not clear that quantum mechanics can explain it in the end.

--

Here is how I see the connection between theory and experiment:

Quantum mechanics is a theory that contains the following mathematical entities: a Hilbert space with an inner product, a wave function, which is an element of this Hilbert space, and some self-adjoint operators corresponding to observables. That's all. Nothing more and nothing less. In particular, there is no real-valued function x(t). With only these few mathematical entities, you are able to compute probability distributions, mean values, standard deviations, correlations, and so on. All these mathematical entities are just strings of symbols on a piece of paper. Let's call this paper "QM axioms". Symbolic manipulation of these strings allows us to compute numbers (which are also strings). For convenience, we will collect all these numbers on a piece of paper called "QM predictions".
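For concreteness, here is a minimal numerical sketch (my own illustration, using an arbitrary Gaussian wave function on a grid) of the kind of numbers that end up on the "QM predictions" paper: everything computable from \psi is of the statistical sort, |\psi(x)|^2, the mean, the spread, and so on, with no x(t) anywhere.

```python
import math

# A toy discretized wave function: a Gaussian centered at x0 = 1
# (illustrative numbers only; any normalizable psi would do).
dx = 0.01
xs = [i * dx for i in range(-1000, 1001)]
psi = [math.exp(-(x - 1.0) ** 2 / 2) for x in xs]

# Normalize so that sum |psi|^2 dx = 1.
norm = math.sqrt(sum(p * p for p in psi) * dx)
psi = [p / norm for p in psi]

# Everything QM hands us is of this statistical kind:
prob_density = [p * p for p in psi]                         # |psi(x)|^2
mean_x = sum(x * d for x, d in zip(xs, prob_density)) * dx  # <x>
var_x = sum((x - mean_x) ** 2 * d
            for x, d in zip(xs, prob_density)) * dx         # <(x - <x>)^2>
print(mean_x, math.sqrt(var_x))  # mean near 1, spread near 0.71
```

The output is a pair of statistical quantities, not a trajectory; that's the entire content of the "QM predictions" paper in this toy case.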

Now, an experiment is something more down to earth. An experimenter has a procedure to prepare his apparatus identically a thousand times. He can repeat the experiment and write down the strings on his display on another piece of paper called "Experimental outcomes". He can use mathematical methods to compute the statistical properties of these values. We will collect all the statistical properties of the measured values on a fourth piece of paper called "Statistical analysis of the experiment".

The wonder of physics is that the values on the paper "QM predictions" for some reason coincide with the values on the paper "Statistical analysis of the experiment". Notice, however, that you can't compare the papers "QM axioms" and "Experimental outcomes". We can't use the papers "QM axioms" and "QM predictions" to write yet another paper called "More QM predictions" that could be compared to the paper "Experimental outcomes". Notice also that our ability to compare the "QM predictions" paper to the "Statistical analysis of the experiment" paper is independent of the ontological status of the "real world". It's completely independent of realism, non-realism, solipsism, or whatever school of philosophy you advocate. It's also independent of whether our senses tell us the truth or not. Everyone has the ability to compare these two papers, independent of whether the outcomes really exist or whether they emerge from some arbitrary unknown mechanism.

QM is totally divorced from the experimental side of this whole process. The connection between QM and experiment is solely statistics. The individual outcomes of the experiment can't be associated with any mathematical entity of the theory, because there is no such entity in the axioms list. When you talk about individual outcomes as beables, you really are talking about the experimental side. But Bell said that beables are elements of the theory. However, the theory claims only to predict some aspects of the numbers you collect in experiments. In particular, it doesn't claim to predict any of the numbers that are written on the "Experimental outcomes" paper, because there is no mathematical object in the theory that can be associated with these outcomes. They are purely on the experimental side. That's why they aren't beables of the theory.

Yes, QM doesn't have definitely existing tables, cats, or flags in the model. But that's not a problem, because a model doesn't need to describe every aspect of the world. If the model chooses to predict only statistics without making any claims about "real objects", then that's fine. You can also have a probability theory that predicts probabilities and mean values for dice or coins without making any statement about the existence of dice or coins and their ontology. The dice theory could, for example, be p(x) = 1/6 for x \in \{1,2,3,4,5,6\}. It doesn't make any reference to a die, yet it completely describes the statistics you will find if you throw a die a hundred times. It's also agnostic with respect to every aspect of the die other than its statistics: agnostic with respect to its color, its material, and even its existence.
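That dice theory and the four-papers picture can be put directly into code (a toy of my own, not anything from the thread): the "theory" is just the table p(x) = 1/6, the "Experimental outcomes" paper is a list of individual throws the theory never predicts, and only the two statistical papers are compared.

```python
import random
from collections import Counter

# "QM axioms" analogue: the entire dice theory is p(x) = 1/6 for x in 1..6.
FACES = range(1, 7)
predicted_p = {x: 1 / 6 for x in FACES}
predicted_mean = sum(x / 6 for x in FACES)  # 3.5, the "QM predictions" paper

# "Experimental outcomes": individual throws, none of which the theory predicts.
random.seed(1)
throws = [random.randint(1, 6) for _ in range(60_000)]

# "Statistical analysis of the experiment": frequencies and sample mean.
counts = Counter(throws)
observed_p = {x: counts[x] / len(throws) for x in FACES}
observed_mean = sum(throws) / len(throws)

# Only the two statistical papers can be compared, entry by entry.
for x in FACES:
    assert abs(observed_p[x] - predicted_p[x]) < 0.015
assert abs(observed_mean - predicted_mean) < 0.05
```

Note that nothing in the first block refers to any particular throw, or even to the existence of a die; the comparison happens entirely between the two statistical summaries.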

I think I can't explain my point any more clearly than this. I hope you can at least understand my thinking and why I find it appropriate.
 
Last edited:
  • #215
rubi said:
I think I can't explain my point any more clearly than this. I hope you can at least understand my thinking and why I find it appropriate.

I think I do understand it. For you, QM is *merely* a mathematical algorithm for generating statistical predictions. It is not actually a *physical theory* at all. I'm not sure that's the wrong way to understand "ordinary QM". It wasn't Bohr's way, for sure. But in many respects it is more sensible than Bohr's way -- for example, as I think we agree, Bohr's way (involving the shifty ontological split) is crazy and obviously wrong. However, in another crucial respect, I think Bohr's way is much better. Physics is physics, not math. Surely it must be the end goal always to say what the world is like. So if you have some mathematical statistics-generating algorithm that really truly says *nothing* about the physical world, that is totally inadequate. It may be perfectly useful to have it, but it is not a physical theory and I think any true physicist wants a satisfactory physical theory and won't be satisfied by anything less. Hence the search for theories (like Bohm's theory, GRWm/f, MWI) which actually tell (or, in the case of MWI, at least purport to tell) a coherent story about what the *world* is like physically -- a story which doesn't involve any shifty splits and which, at the end of the day, both produces recognizable macroscopic objects and gets the details right for the statistics of how often they should move this way and that.
 
  • #216
Tomorrow my regular teaching duties resume, so I won't have time to continue posting on this thread with anything like the frequency of the last week (and perhaps not at all). Thanks to all of you for the stimulating discussion. I learned about a few new things, one of which turned out to be a dud, but I'm still hopeful about this, which I plan to read tomorrow:

http://prl.aps.org/abstract/PRL/v48/i19/p1299_1


Just as one final thought on the original topic of the thread, I hope people who voted for "anti-realism" in the poll will make sure not to miss my post #204 in which I sketch a mathematically rigorous version of the EPR argument *from locality to* what (I think) people who voted "anti-realism" mean by "realism". Clearly, just as a matter of sheer elementary logic, anybody who thinks that we can elude the spectre of nonlocality by denying (this) "realism", has something pretty serious to think about there. I will note also that, despite a couple of half-hearted attempts, nobody rose to the challenge of showing how the perfect correlations (observed in the usual EPR-Bell scenario when a=b) can be explained by a local but non-realist model. From the point of view of the theorem in #204 this is of course not surprising: "realism" (meaning here deterministic non-contextual counterfactually-definite hidden variables) is the *only* way to explain these particular correlations locally. The correlations and the assumption of locality *logically entail* "realism". That is what that little mini-theorem says.

I therefore declare all the votes for "anti-realism" to be void, and hence the correct answer, "non-locality", to be the winner of the poll. :smile:
 
  • #217
ttn said:
Yes, obviously this is a complex issue. Is the white color of the flag intrinsic in the flag, or is it somehow a relational property between the flag and my sensory apparatus, or what? All of these sorts of things are tricky and subtle and probably none of us want to get into them here! My point is just: if you think we can get any useful information at all about the external world from our senses (and I certainly do), then surely this will have to include basic facts like that there is a 3D world full of stuff that moves around and interacts and that includes things like little flag-shaped hunks of material that sometimes pop up and down. My view is that, if you regard that as even-possibly-mistaken, then you are never going to get anything remotely resembling empirical science off the ground; certainly, if such things "might be wrong", then *literally everything we have ever taken as empirical evidence for anything in science ever* "might be wrong", and then, well, we're totally at sea.


I don’t follow this I’m afraid (or perhaps I should tentatively say I don’t agree with it!). Surely, all we have to work with is phenomena; the scientific method, involving testability, works within this framework, and it is that framework that I refer to as empirical reality. This (our) reality of phenomena exists within space and time and involves all the phenomena of mechanisms that cause, as you say, flags to pop up and down, and everything else that we experience as phenomena. But, to preempt what I say below, I don't consider that space and time, cause and effect, or any other familiar scientific notions exist in that manner outside of phenomena, i.e. within independent reality. As far as I can work out, holding such a view in no way diminishes the power of the scientific method; the models work and often work exceedingly well. It's just that I don't extrapolate those models, with their scientific credentials, to an area outside of the realm in which they were created and tested, i.e. to the realm of independent reality. There is nothing stopping anyone extrapolating them to independent reality, of course, but then they cease to be empirical models (how can an empirical model be valid within an arena that lies outside of empiricism?); rather, I think they become philosophical conjecture, for the reasons I outline below.

What scientists do is try to step outside of phenomena and apply their empirically verified models to independent reality, and they do so via various flavours of realism. Realist conceptions are composed of two elements. The first consists of the notion of a reality conceived as totally independent of our possible means of knowing it (independent reality), along with the hypothesis that we do have access to this reality, in the sense that we can say something “true” concerning it. But this hypothesis is not scientifically provable (which is not to say it is incorrect, of course; there are legitimate means by which to assert it, in terms of the no-miracles argument, but there are equally valid counterarguments that can be made). The second of these two elements concerns a representation we build up of independent reality, worked out from the phenomena; but since the first element can only be a hypothesis, the second element obviously cannot be tested and hence lies outside of the scientific method.

The question as to how close empirical reality is to independent reality is an untestable one, so I tend to stay on the side of caution. A miss is as good as a mile; I can’t see the point of assuming a degree of closeness, as if perhaps we only need to concern ourselves with the mechanistic alteration of the “thing in itself” by the characteristics of the eye. That to me seems a bit of a cop-out; it restores a comfortable feeling that what lies within independent reality is a rough approximation of phenomena. Such a view can act as a counter to the uncomfortable logic associated with taking on board the notion of our reality as existing only as phenomena, and I would tentatively suggest that this may be the stance you take up; it allows a sense of scientific accessibility to some aspects of independent reality, but as I say, for me a miss is as good as a mile. So I go the whole hog: I don’t presuppose that we can know anything about independent reality using familiar notions and the scientific method; in fact, I don’t consider that independent reality is embedded in space and time. But none of this stops me in any manner at all from seeing empirical reality as being entirely valid; it is our reality and it works, and I don’t invoke solipsism or idealism here. I consider the notion of an unknowable independent reality to be perfectly adequate in providing the means by which to philosophically envisage empirical reality as an “emergent” entity (“emergent” in this sense not referring to any familiar notions) governed by laws that have their “origin” (“origin” here not referring to any familiar notions) within independent reality, rather than being entirely referenced to minds (or a single mind) as per radical idealism or solipsism.
Of course, the logic of this stance entails giving up the notion of (for example) stars as having an intrinsic historical timeline outside of empirical reality. From this perspective there was no birth of the star outside of empirical reality; rather, that birth is scientifically explained by us in terms of a hypothetical observer being present all those years ago and along its timeline thereafter. After all, all we have to explain the star is phenomena, so to be consistent I can’t extrapolate that phenomena to an arena within independent reality under the name of science (i.e. to a universe outside of empirical reality); from this perspective of mine, a scientific model is solely a property of human experience and has to stay that way. So the timeline of the star is one that exists only within empirical reality; the star does not have an intrinsic historical timeline. It can be an uncomfortable stance, but it’s one that seems to make a lot of sense to me, and it separates the proper scientific method (in terms of verified models within empirical reality) from whatever we call the mode of inquiry that attempts to investigate independent reality, given that the relationship between empirical reality (our reality) and independent reality (a reality outside of phenomena) is not a scientific one.

Of course, such a standpoint confines science to accounting for empirical reality in terms of human experience, rather than being able to explain independent reality. I guess such a standpoint is untenable to you, but for me it seems to be the only way forward in terms of what science can seemingly access. Having said that, I am always keen to see whether there are grounds on which the scientific method can be shown to be valid, in terms of its remit of testability, within an arena of independent reality that by definition cannot include any notion of testability, because testability can only be invoked by an observer and phenomena, which immediately sets up the testability as occurring within empirical reality. But I guess I have already gone too far from the scope of this thread; I have only done so to illustrate that there are means by which phenomena by itself can be properly dealt with by the scientific method, albeit in a manner of explaining human experience concerning empirical reality (phenomena) rather than explaining independent reality (outside of phenomena).

These issues are explored very comprehensively in the writings of Bernard d’Espagnat (“Conceptual Foundations of Quantum Mechanics”, “Veiled Reality” and “On Physics and Philosophy”). It is d’Espagnat’s strong and well-worked-out thesis that invokes a notion of unknowable independent reality in the context of an empirical reality of phenomena emergent “from” independent reality (“emergent”, of course, not being associated with familiar terms of cause and effect). He refers to this version of realism as Open Realism.
It is largely through his writings that I arrived at my particular understanding of issues concerning realism, idealism and empiricism.

Incidentally, d’Espagnat was a close colleague of Bell at CERN, and some of the references you make concerning Bell arise within d’Espagnat’s books, where he talks about how he and Bell discussed these issues generally; in fact, it was d’Espagnat who instigated the Aspect correlation experiments when he was Professor of Physics at the University of Paris-Orsay. Needless to say, they were at opposite ends of the realism debate, but they seemed to be good friends despite that! How I wish that he were following this forum; he could perhaps offer an insight into Bell’s thinking that you touch upon so often!
 
  • #218
ttn said:
I therefore declare all the votes for "anti-realism" to be void, and hence the correct answer, "non-locality", to be the winner of the poll. :smile:
Seems very reasonable to me. :smile: By the way, I'm sure everyone appreciates your contribution and detailed posts. Irrespective of the "truth", I always seem to get depressed by reading pro-instrumentalism arguments that seem to consider physics to be the science of meter readings. Physics in that way would be pretty boring. It kind of reminds me of behaviourism in the cognitive sciences.
 
  • #219
ttn said:
I think I do understand it. For you, QM is *merely* a mathematical algorithm for generating statistical predictions. It is not actually a *physical theory* at all. I'm not sure that's the wrong way to understand "ordinary QM".

I think there are several equally valid ways to understand it, and my way is just one of them.

Physics is physics, not math. Surely it must be the end goal always to say what the world is like. So if you have some mathematical statistics-generating algorithm that really truly says *nothing* about the physical world, that is totally inadequate.

Physics is always expressed using math, and thus every physical law is just a string of symbols, independently of whether it predicts outcomes or just statistics. The theories might differ in how much they say about the physical world, but I think it's wrong to say that statistical predictions say nothing about the physical world. They do say something; they just don't say everything. Newton's law of gravity also holds independently of whether the gravitating object is a point mass or a spherical object with an inhomogeneous radial mass distribution.

It may be perfectly useful to have it, but it is not a physical theory and I think any true physicist wants a satisfactory physical theory and won't be satisfied by anything less. Hence the search for theories (like Bohm's theory, GRWm/f, MWI) which actually tell (or, in the case of MWI, at least purport to tell) a coherent story about what the *world* is like physically -- a story which doesn't involve any shifty splits and which, at the end of the day, both produces recognizable macroscopic objects and gets the details right for the statistics of how often they should move this way and that.

That's a nice goal, but I think a physical theory can never tell us what the world is like. It's always just a model that explains aspects of the world. Some explain more aspects of the world and some explain less. None explain all, and if they did, then there would probably be equivalent models with entirely different ontologies, so the models could still not tell us with certainty what the world is like. For example, Bohmian mechanics might be more pleasing to you, but it makes exactly the same predictions as ordinary QM (as far as I know), so we can never know which of these is right.

ttn said:
Just as one final thought on the original topic of the thread, I hope people who voted for "anti-realism" in the poll will make sure not to miss my post #204 in which I sketch a mathematically rigorous version of the EPR argument *from locality to* what (I think) people who voted "anti-realism" mean by "realism". Clearly, just as a matter of sheer elementary logic, anybody who thinks that we can elude the spectre of nonlocality by denying (this) "realism", has something pretty serious to think about there. I will note also that, despite a couple of half-hearted attempts, nobody rose to the challenge of showing how the perfect correlations (observed in the usual EPR-Bell scenario when a=b) can be explained by a local but non-realist model. From the point of view of the theorem in #204 this is of course not surprising: "realism" (meaning here deterministic non-contextual counterfactually-definite hidden variables) is the *only* way to explain these particular correlations locally. The correlations and the assumption of locality *logically entail* "realism". That is what that little mini-theorem says.

I therefore declare all the votes for "anti-realism" to be void, and hence the correct answer, "non-locality", to be the winner of the poll. :smile:

I think that's not a fair way to end the discussion. After all, you just said that the instrumentalist viewpoint might not be "the wrong way to understand ordinary QM". I think that if you take ordinary (instrumentalist) QM and give beable status only to the statistical properties it predicts (including the correlations), then you can formally check Bell's locality criterion (it's just a formal mathematical criterion that can be applied to any theory, independent of whether you classify it as physical or not), and it would turn out that instrumentalist QM obeys it. So instrumentalist QM does qualify as a Bell-local, non-realistic model that explains the correlations. In the end, then, whether such a theory exists depends on whether you accept individual outcomes as beables or not. There is no mathematical reason that prevents us from applying the Bell criterion to a theory that doesn't have individual outcomes as beables and instead gives this status to statistical properties.




--

In the end, I also want to thank you for the discussion. I've learned something too, and I will definitely try, on a piece of paper, whether the Bell locality criterion applied to instrumentalist QM classifies it as local. That would probably be one of the coolest things I've come across in the last few months.
 
  • #220
rubi said:
I think that's not a fair way to end the discussion. After all, you just said that the instrumentalist viewpoint might not be "the wrong way to understand ordinary QM". I think that if you take ordinary (instrumentalist) QM and give beable status only to the statistical properties it predicts (including the correlations), then you can formally check Bell's locality criterion (it's just a formal mathematical criterion that can be applied to any theory, independent of whether you classify it as physical or not), and it would turn out that instrumentalist QM obeys it. So instrumentalist QM does qualify as a Bell-local, non-realistic model that explains the correlations. In the end, then, whether such a theory exists depends on whether you accept individual outcomes as beables or not. There is no mathematical reason that prevents us from applying the Bell criterion to a theory that doesn't have individual outcomes as beables and instead gives this status to statistical properties.




--

In the end, I also want to thank you for the discussion. I've learned something too, and I will definitely try, on a piece of paper, whether the Bell locality criterion applied to instrumentalist QM classifies it as local. That would probably be one of the coolest things I've come across in the last few months.

Unfortunately, the first thing you'll write down on your paper is "P(A..." and then you'll realize that there's trouble, since "A" here refers to the actual outcome of an experiment -- something you've said isn't part of your instrumentalist version of QM at all. How can the probabilities, attributed by a theory to a certain event, satisfy (or even fail to satisfy) a certain mathematical condition, when according to the theory there is no such event?

Anyway, good luck, and thanks again for the enjoyable discussion.
 
  • #221
ttn said:
Unfortunately, the first thing you'll write down on your paper is "P(A..." and then you'll realize that there's trouble, since "A" here refers to the actual outcome of an experiment -- something you've said isn't part of your instrumentalist version of QM at all. How can the probabilities, attributed by a theory to a certain event, satisfy (or even fail to satisfy) a certain mathematical condition, when according to the theory there is no such event?

Anyway, good luck, and thanks again for the enjoyable discussion.

The beables are the statistical properties like probability distributions, mean values and so on. I will not start by writing down P(A, but instead I will write down P(<A>|...) and then check whether the formal criterion is obeyed.
 
  • #222
rubi said:
The beables are the statistical properties like probability distributions, mean values and so on. I will not start by writing down P(A, but instead I will write down P(<A>|...) and then check whether the formal criterion is obeyed.

Cool. But please describe this as "Rubi's formulation of locality", not Bell's, when you publish...
 
  • #223
ttn said:
Cool. But please describe this as "Rubi's formulation of locality", not Bell's, when you publish...

Why? You write in your own paper that for the locality criterion P(b_1|B_3 b_2) = P(b_1|B_3),

J. S. Bell's concept of local causality (Travis Norsen) said:
b_i refers to the value of some particular beable in space-time region i and B_i refers to a sufficient (for example, a complete) specification of all beables in the relevant region.

So if I choose my beables to be the statistical properties (instead of the outcomes, as you do for Copenhagen), then I can formally apply this criterion to the theory, where b_1 = <A> and so on. I'm just taking the general definition and applying it to the special case of instrumentalist QM, where the beables are the statistical properties. This is precisely Bell's formulation of locality applied to the theory of QM with a particular choice of beables. It's not Rubi's formulation.


P.S.: I know that as a convinced Bohmian, you will say: "Nooo, the outcomes must be beables, because the world can't be without outcomes." But for someone who accepts that the world is "nothing but wave function", it is a perfectly valid viewpoint to claim that the beables are the statistical properties.
 
Last edited:
  • #224
Len M said:
The question as to how close empirical reality is to independent reality is an untestable one, so I tend to stay on the side of caution – a miss is as good as a mile. I can’t see the point of assuming a degree of closeness, as if perhaps we only need to concern ourselves with the mechanistic alteration of the “thing in itself” by the characteristics of the eye – that to me seems a bit of a cop-out; it restores a comfortable feeling that what lies within independent reality is a rough approximation of phenomena...
But agreement with everything you wrote is not inconsistent with violation of Bell's implying non-locality. And I personally agree with pretty well everything you wrote.
 
  • #225
bohm2 said:
But agreement with everything you wrote is not inconsistent with violation of Bell's implying non-locality. And I personally agree with pretty well everything you wrote.

I only made the post in terms of a very small part of ttn’s overall important contribution to this thread, namely when he said:

ttn said:
My point is just: if you think we can get any useful information at all about the external world from our senses (and I certainly do), then surely this will have to include basic facts like that there is a 3D world full of stuff that moves around and interacts and that includes things like little flag-shaped hunks of material that sometimes pop up and down. My view is that, if you regard that as even-possibly-mistaken, then you are never going to get anything remotely resembling empirical science off the ground; certainly, if such things "might be wrong", then *literally everything we have ever taken as empirical evidence for anything in science ever* "might be wrong", and then, well, we're totally at sea.
As I said in my post, I see nothing at all wrong in simply accepting that science (as an experimental discipline) belongs quite properly within phenomena. ttn seems to me to be picking and choosing in an arbitrary manner between science as practiced within empirical reality (in terms of testability) and the extrapolation of those models to an independent reality that cannot (and does not) involve testability, without seemingly keeping track of what he is doing (at least not in a formal, transparent manner that identifies the difference between the scientific status of a model in terms of empirical reality and the same model in terms of independent reality). It's easier for me to keep track of the mix between empirical reality and independent reality because I go the whole hog: I confine the scientific method to phenomena, and I reserve the realm of independent reality as being unknowable in a scientific sense and having no correspondence to empirical models, while philosophically being free to conjecture about the nature (and importance) of its existence. For a less extreme stance, though, it becomes more difficult to keep track, but I think you have to, and you have to be quite transparent about it in public, because there is no question that a mix is being invoked between the scientific method involving testability and the extrapolation of that model to a realm of independent reality that cannot involve testability.

But ttn then says
if you regard that as even-possibly-mistaken

implying that accepting the possibility that empirical reality (phenomena) is not close to independent (external) reality in some manner spells the end of science, in that empirical science may all be “wrong”. I don’t see that at all: empirical science is always going to be “right” within empirical reality (in the sense of mathematical predictive models within their domain of applicability), and for me that fact is one of the most remarkable aspects of the scientific method. Newton’s predictive mathematical model, within its domain of applicability, is going to be valid ten thousand years from now; that for me has enough solidity to more than compensate for being (as ttn says) “totally at sea” because we can't scientifically prove that empirical models have the same applicability within independent reality.

The extract from ttn seems to be something said from the "heart" with conviction and I wondered whether it had any specific relevance to his science as opposed to his philosophical stance. I guess I’m not going to know for sure now that ttn is back to teaching, but I certainly agree with you when you say
But agreement with everything you wrote is not inconsistent with violation of Bell's implying non-locality

so perhaps that would also be the viewpoint of ttn?
 
Last edited:
  • #226
Correct me if I am wrong, but the fundamental constituents of reality are not inadequate classical concepts like 'particle' and 'wave', but information. We are not seeing particles, but always seeing information about particles. The brain is not just a simple collection of particles (as a Newtonian perspective would dictate), but an (emergent) information processor. At the rock bottom of things, we are not seeing tables and chairs but information about tables and chairs, and being such, information has no obligation to be material-like, corpuscular-like or classical-like. While there could be a stunning correspondence between tables and our sensation of tables, we should not overlook the simple fact that we only have access to the information about tables, not the tables themselves. The ultimate nature of tables is not accessible, hence it is not a valid scientific question. I totally agree with Bohr: it's only what we can say about Nature, not what or how Nature is. It's surprising that we have as good models of reality as we do, even if they fail to make sense at certain scales.
 
Last edited:
  • #227
Maui said:
Correct me if I am wrong, but the fundamental constituents of reality are not inadequate classical concepts like 'particle' and 'wave', but information. We are not seeing particles, but always seeing information about particles. The brain is not just a simple collection of particles (as a Newtonian perspective would dictate), but an (emergent) information processor. At the rock bottom of things, we are not seeing tables and chairs but information about tables and chairs. While there could be a stunning correspondence between tables and our sensation of tables, we should not overlook the simple fact that we only have access to the information about tables, not the tables themselves. The ultimate nature of tables is not accessible, hence it is not a valid scientific question. I totally agree with Bohr: it's only what we can say about Nature, not what or how Nature is.

Yes, I think I would agree very much with what you say, in that you seem to be placing phenomena as the only thing to which we have access, and it is within that framework that we use the scientific method with spectacular success. Why should we ask any more of such a successful method, in wanting it to be applicable in the same manner to a realm outside of phenomena, where the very essence of the scientific method, namely testability, cannot be carried out?

My only difference perhaps would be that I do see a need for "something" outside of phenomena from which empirical reality "emerges" (in an unknowable manner); otherwise we have to adopt solipsism or radical idealism. I think the consistencies we all observe as phenomena (and agree on) depend on something other than ourselves, so in this sense I am a realist. It's just that I don't see that we can access my "something" that "exists" within independent reality (i.e. outside of phenomena) in any scientific sense (at least not as I understand the scientific method, in terms of the method requiring a notion of testability).
 
Last edited:
  • #228
Len M said:
As I said in my post, I see nothing at all wrong in simply accepting that science (as an experimental discipline) belongs quite properly within phenomena. ttn seems to me to picking and choosing in an arbitrary manner between science as practiced within empirical reality (in terms of testability) and the extrapolation of those models to an independent reality that cannot (and does not) involve testability, without seemingly keeping track of what he is doing...

I think there is a contrast between applied science, the basic research that underlies technology, and pure science, which I think has some kind of understanding as its goal. When you're trying to build a better bridge, or better electronics, or whatever, there really is a sense in which you don't need to understand anything; you just need to know reliable rules of the form "In situation S, if you do X, you'll get result Y with probability Z". By this practical criterion for science, there is nothing wrong with describing the orbits of the planets, or the energy levels of hydrogen, or the relationship between velocity and kinetic energy as an infinite series, all of whose coefficients are empirically determined. So the Ptolemaic scheme for describing planetary motion, with its spheres within spheres within spheres, is really perfectly fine, and Balmer's formula for computing energy levels is perfectly fine. Explaining the null results of the Michelson-Morley experiment by an ad hoc velocity-dependent length contraction and time dilation is perfectly fine. There is no practical need for fundamental theories at all.

But there is another kind of science that considers the job not to be done when you have a formula that empirically works pretty well. Some kinds of people are bugged by arbitrariness, by lots of parameters whose values seem meaningless. They prefer to try to understand how those successful formulas come about, why the parameters are what they are. They would like an understanding of the principles involved. Even though we may never experience gravity billions of times stronger than on the Earth, they want to be able to have an idea of what things would be like in those circumstances.

It's really hard to make a decisive partition of science into what's practical and what's pure, because a lot of science that was once considered a matter of intellectual curiosity ended up having practical applications. However, I think that the divorce between practical physics and pure physics has happened, and many of the new discoveries and ideas since maybe the 60s (quantum chromodynamics, supersymmetry, loop quantum gravity, string theory, Hawking radiation, the holographic principle, quark theory, etc.) will likely have no practical applications for decades, if ever.

So to me, it's pretty weird to talk about fundamental physics in purely instrumental terms: All we care about is a way of calculating probabilities for the outcomes of experiments. WHY? Why do you care about a way of calculating probabilities for the outcomes of experiments? If the experiment takes a multi-billion dollar collider to take place, then who cares? Knowing the answer has no practical purpose, it seems to me. If all you care about is the pragmatics of predicting what happens when we perform specific experiments, then fundamental physics is over, it seems to me.
 
  • #229
stevendaryl said:
[..] If all you care about is the pragmatics of predicting what happens when we perform specific experiments, then fundamental physics is over, it seems to me.
I agree; regretfully that was the paradigm for the last century, it seems.
 
  • #230
rubi said:
Why? You write in your own paper that for the locality criterion ...

Travis channels Bell. :smile: So he can present anything as being what Bell says, and you cannot.
 
  • #231
Len M said:
so perhaps that would also be the viewpoint of ttn?
That's a good question and I'm not sure. But my gut hunch is that ttn would not agree with the Kantian and/or epistemic structural realist position that I think both you (if I'm understanding you) and I seem to subscribe to, but who knows?
 
  • #232
While many of these have been mentioned in various threads/posts, I thought I'd post a list of the major papers I've come across arguing that violation of Bell's inequality implies non-locality, irrespective of any other issues (e.g. realism, determinism, hidden variables, pre-existent properties, etc.):

Bertlmann’s socks and the nature of reality
http://cds.cern.ch/record/142461/files/198009299.pdf

J.S. Bell’s Concept of Local Causality
http://chaos.swarthmore.edu/courses/Physics113_2012/002.pdf

Local Causality and Completeness: Bell vs. Jarrett
http://lanl.arxiv.org/PS_cache/arxiv/pdf/0808/0808.2178v1.pdf

Non-Local Realistic Theories and the Scope of the Bell Theorem
http://arxiv.org/ftp/arxiv/papers/0811/0811.2862.pdf

The uninvited guest: ‘local realism’ and the Bell theorem
http://philsci-archive.pitt.edu/5258/1/The_uninvited_guest__'local_realism'_and_the_Bell_theorem.pdf

A Criticism of the article "An experimental test of non-local realism"
http://arxiv.org/abs/0809.4000

John Bell and Bell's Theorem
http://www.mathematik.uni-muenchen.de/~bohmmech/rt/bbt.pdf

What Bell proved: A reply to Blaylock
http://www.stat.physik.uni-potsdam.de/~pikovsky/teaching/stud_seminar/Bell_EPR-2.pdf

Not throwing out the baby with the bathwater: Bell’s condition of local causality mathematically ‘sharp and clean’
http://mpseevinck.ruhosting.nl/seevinck/Bell_LC_final_Seevinck_corrected.pdf

Can quantum theory and special relativity peacefully coexist?
http://mpseevinck.ruhosting.nl/seevinck/Polkinghorne_white_paper_Seevinck_Revised3.pdf

What is the meaning of the wave function?
http://www.fyma.ucl.ac.be/files/meaningWF.pdf

The Message of the Quantum?
http://www.maphy.uni-tuebingen.de/members/rotu/papers/zei.pdf

Was Einstein Wrong? A Quantum Threat to Special Relativity
http://www.stealthskater.com/Documents/Quantum_01.pdf
 
Last edited by a moderator:
  • #234
bohm2 said:
While many of these have been mentioned on various threads/posts I thought I'd post a list of the major papers I've come across arguing that violations of Bell's inequality implies non-locality, irrespective of any other issues (e.g. realism, determinism, hidden variables, pre-existent properties, etc.): ...


Unfortunately, all these papers include the hidden assumption that individual experimental outcomes correspond to some element of the theory of quantum mechanics. They either fail to understand the difference between values that come from the theory and values that are determined by experiment, or they secretly use a non-standard theory of quantum mechanics (standard QM supplemented by a mechanism that can in principle predict individual outcomes; everyone knows that this is not the case in the standard theory) and claim that it is the standard theory.

Bell's criterion actually does capture our intuitive understanding of locality, after all. You can, for example, apply it straightforwardly to any classical theory, and it captures what we would consider locality of a classical theory. However, these papers apply it to a quantum theory without acknowledging the fact that a quantum theory isn't a classical theory anymore: it doesn't have something like trajectories of observables (unlike, for example, Bohmian mechanics), and thus you can't check the criterion for them. You have to check it for the variables of the quantum theory (or better: a subclass of them, called the "beables") instead. The word "beable" is assigned to those elements of the theory that correspond to what the theory claims to be physically real. In a classical theory or in Bohmian mechanics, the beables would be things like position. Standard quantum mechanics is basically a theory that describes the evolution of probability, so you would choose the beables to be the probability distributions. Notice that even if you wanted to, you couldn't choose position as a beable, because it isn't an element of the standard theory at all. Locality is a property of a theory, so you must apply the criterion to the theory alone, without any supplements. So in the end, Bell's locality criterion is actually really good, but applied in a wrong way. It's just that all the generality and terminology involved makes it quite hard to understand what's wrong with the argument.

So what these papers actually prove is that if your theory assumes realism, which means that it does account for the individual outcomes of the experiment, then you can prove that it must be non-local. You can read the proof in ttn's post #204. However, you must note that this proof only holds if your theory really accounts for the outcomes. So the realism assumption ("the theory does account for individual outcomes") implies Bell-non-locality. If your theory is non-realist ("it doesn't account for the individual outcomes of the experiment"), then it is still open whether it is local or non-local.

If you take standard QM seriously (that means you accept that it doesn't account for individual outcomes), then Bell's locality criterion is actually satisfied whenever the no-communication theorem holds.
 
  • #235
rubi said:
Standard quantum mechanics is basically a theory that describes the evolution of probability, so you would choose the beables to be the probability distributions.
Probability of "what"?
 
  • #236
bohm2 said:
Probability of "what"?

The probability of measuring a given value for an observable, just as every standard textbook says. But the value itself isn't included in QM, only its probability distribution. There is no prediction about concrete values. QM just says that the statistics of the measurements are given by the probability distribution. There is no underlying "real" observable that has a particular value.

To say it as briefly as possible: the above papers prove that if a theory can account for the individual outcomes of the experiment, then it must be non-local. Standard QM doesn't, so it can be a local theory.
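To make concrete what "only the probability distribution" means in the simplest case, here is a sketch (my own illustration, not anything from the thread, using the textbook singlet-state formula P(A,B|a,b) = (1 - AB·cos(a-b))/4): the theory supplies the distribution and everything derivable from it, such as the correlation <AB> = -cos(a-b), but never an individual outcome.

```python
import math

def singlet_joint(A, B, a, b):
    # Textbook QM prediction for the singlet state: joint probability of
    # outcomes A, B in {+1, -1} for spin measurements along angles a, b.
    # This distribution is all the standard theory provides.
    return 0.25 * (1 - A * B * math.cos(a - b))

def correlation(a, b):
    # The mean value <AB>, computed from the distribution alone
    return sum(A * B * singlet_joint(A, B, a, b)
               for A in (+1, -1) for B in (+1, -1))
```

For equal settings, `correlation(a, a)` is -1: the distribution already encodes the perfect (anti-)correlations discussed earlier in the thread, without ever assigning a value to a single run.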
 
Last edited:
  • #237
rubi said:
The probability to measure a given value for an observable, just as every standard textbook says. But the value itself isn't included in QM, only it's probability distribution. There is no prediction about concrete values. QM just says that the statistics of the measurement is given by the probability distribution. There is no underlying "real" observable that has a particular value.
Observable of what? If I'm understanding you (I may not be) this has been considered:
Muller (1999) stresses that no space-time formulation of quantum mechanics is as of yet available—thus it can not be regarded a spacetime theory—, and that it is a hard job to formulate one, be it in Minkovskian or Galilean spacetime. However, despite being true, this is not relevant for the problem here. All that is needed to consider the question of local causality are predictions for measurement outcomes at certain space-time locations as in Fig. 3 (see Appendix), and quantum mechanics does give such predictions when the measurements and the state to be measured are specified. It does not matter that the theory itself cannot be taken to be a spacetime theory on some appropriate differentiable manifold.
Can quantum theory and special relativity peacefully coexist?
http://mpseevinck.ruhosting.nl/seevinck/Polkinghorne_white_paper_Seevinck_Revised3.pdf
 
  • #238
bohm2 said:
Observable of what? If I'm understanding you (I may not be) this has been considered: ...

An observable like position or spin. The above paper also uses the individual outcomes as input for Bell's criterion, and thus the same argument applies.

(By the way, he is even wrong in stating that relativistic QFT presupposes locality. In fact, it is a framework that provides some general theorems under the assumption of locality. Whether a concrete theory satisfies it or not always has to be checked.)
 
Last edited:
  • #239
rubi said:
To say it as briefly as possible: These above papers prove that if a theory can account for the individual outcomes of the experiment, then it must be non-local. Standard QM doesn't do it, so it can be a local theory.
What is your definition of locality?
 
  • #240
bohm2 said:
What is your definition of locality?

The Bell local causality condition P(b_1 |B_3 b_2) = P(b_1 | B_3), where b_i, B_i are beables / sets of beables in some regions of spacetime (if you want to know which regions of spacetime, see for example ttn's paper "J. S. Bell's concept of local causality", there's a picture).
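As an aside for readers following the math: the bite of this factorized condition, when the b_i are taken to be individual outcomes, can be illustrated in a few lines of code. The toy local model below is a standard textbook example (the sign-function response and the CHSH settings are my own choices, not anything from the thread): it computes each outcome from its own setting and the shared hidden variable only, as the factorized form demands, and stays within the CHSH bound of 2, while the quantum singlet correlation E(a,b) = -cos(a-b) reaches 2√2.

```python
import math

def qm_corr(a, b):
    # Quantum singlet-state prediction for <AB> at settings a, b (radians)
    return -math.cos(a - b)

def local_corr(a, b, n=100_000):
    # Toy local model: hidden variable lam is a uniformly distributed angle;
    # each wing computes its +/-1 outcome from its OWN setting and lam only,
    # i.e. the joint distribution factorizes as Bell's condition demands.
    total = 0
    for k in range(n):
        lam = 2 * math.pi * k / n
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = 1 if math.cos(b - lam) >= 0 else -1
        total += A * B
    return total / n

def chsh(corr):
    # CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(corr(a, b) - corr(a, bp) + corr(ap, b) + corr(ap, bp))
```

Evaluating `chsh(local_corr)` gives 2 (up to discretization error), saturating the Bell bound, while `chsh(qm_corr)` gives 2√2 ≈ 2.83: no model of the factorized, outcome-beable form reproduces the quantum correlations.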
 
  • #241
DrChinese said:
Relational BlockWorld is local. I consider it non-realistic.
How about this model? One of the papers just came out today. The author argues that it is local and makes all the predictions of QM:
But, by combining Richard Feynman’s formulation of quantum mechanics with a model of particle interaction described by David Deutsch, we develop a system (the “space of all paths,”- SP) that (1) is immediately seen to replicate the predictions of quantum mechanics, has a single outcome for each quantum event (unlike MWI on which it is partly based), and (3) contains the set λ of hidden variables consisting of all possible paths from the source to the detectors on each side of the two-particle experiment. However, the set λ is nonmeasurable, and therefore the above equation is meaningless in SP. Moreover, using another simple mathematical expression (based on the exponentiated-action over a path) as an alternative to the above equation, we show in a straightforward argument that SP is a local system.
Failure Of The Bell Locality Condition Over A Space Of Ideal Particles And Their Paths
http://lanl.arxiv.org/ftp/arxiv/papers/1302/1302.5418.pdf

Bell inequalities and hidden variables over all possible paths in a quantum system
http://lanl.arxiv.org/ftp/arxiv/papers/1207/1207.6352.pdf

The Space of all paths for a quantum system: Revisiting EPR and Bell's Theorem
http://lanl.arxiv.org/ftp/arxiv/papers/1109/1109.6049.pdf

What is interesting is that the author's argument is similar to rubi's, I think (?), but he arrives at it using a different model:
The interesting thing, though, is that all proofs of Bell’s theorem (his original arguments and those by others in the same vein) for two entangled particles involve a probability distribution. This means that there is indeed a hidden premise, a tacitly assumed “X”—namely, that the underlying space for a quantum system is measurable. In other words, if we choose “X” to be “measurable” then in Maudlin’s formula we have the proposition, “No local, measurable theory can make The Predictions for the results of experiments carried out very far apart.” We consider Bell’s simple proof of this specific proposition (that is, when “measurable” is substituted for X) to be obviously valid.
 
Last edited:
  • #242
bohm2 said:
How about this model? One of the papers just came out today. The author argues that it is local and makes all the predictions of QM:

Failure Of The Bell Locality Condition Over A Space Of Ideal Particles And Their Paths
http://lanl.arxiv.org/ftp/arxiv/papers/1302/1302.5418.pdf

Bell inequalities and hidden variables over all possible paths in a quantum system
http://lanl.arxiv.org/ftp/arxiv/papers/1207/1207.6352.pdf

The Space of all paths for a quantum system: Revisiting EPR and BEll's Theorem
http://lanl.arxiv.org/ftp/arxiv/papers/1109/1109.6049.pdf

What is interesting is that the author's argument is similar to rubi's, I think (?), but he arrives at it using a different model:

Thanks for these references. They are fascinating, and very exciting. However, in
"Bell Inequalities And Hidden Variables Over All Possible Paths In A Quantum System" (http://lanl.arxiv.org/ftp/arxiv/papers/1207/1207.6352.pdf), the author says something false, which already came up in this thread:
It seems surprising that no one until now has noticed the hidden premise of measurability in Bell’s definition of locality

As I pointed out, Pitowsky and others came up with counterexamples to Bell's Theorem that exploited nonmeasurability of the space of hidden variables. Pitowsky's model was very ad hoc, and Leffler's model seems much more natural and physically meaningful, but it's false to say that nobody had looked at nonmeasurability before.
 
  • #243
bohm2 said:
What is interesting is that the author's argument is similar to rubi's, I think (?), but he arrives at it using a different model:

The problems with measurability that I mentioned earlier don't apply to the argumentation of ttn, because Bell locality doesn't require something like a translation-invariant measure on the space of wave-functions. It's only needed if you want to rule out all theories with huge hidden-variable spaces using Bell's theorem. That's why the counterexamples stevendaryl mentioned can work.

My latter argument doesn't require any fancy math. I just argue that standard QM doesn't account for individual outcomes of measurements, and thus they can't be beables of the theory. Ttn's proof (post #204) of non-locality, however, requires individual outcomes to be beables, and thus it can't be applied to standard QM in this way. The beables are the probability distributions instead, and if you apply Bell's condition to them, it reduces to the statement of the no-communication theorem, so the theory is Bell local whenever the no-communication theorem holds.
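This point can be illustrated numerically: for the singlet state, the joint outcome distribution violates the CHSH inequality, yet each side's marginal distribution is independent of the distant analyzer setting, which is exactly what the no-communication theorem says. A minimal sketch using only the textbook singlet probabilities (nothing specific to the papers above):

```python
import math

def p(a, b, alpha, beta):
    """Singlet joint probability for outcomes a, b = +/-1 at
    analyzer angles alpha, beta: P = (1 - a*b*cos(alpha - beta)) / 4."""
    return (1 - a * b * math.cos(alpha - beta)) / 4

def corr(alpha, beta):
    """Correlation E(alpha, beta); for the singlet this is -cos(alpha - beta)."""
    return sum(a * b * p(a, b, alpha, beta) for a in (1, -1) for b in (1, -1))

# CHSH with the standard angle choices: |S| = 2*sqrt(2) > 2
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, 3 * math.pi / 4
S = corr(a0, b0) - corr(a0, b1) + corr(a1, b0) + corr(a1, b1)

def marginal_a(a, alpha, beta):
    """Alice's marginal P(a | alpha), summing out Bob's outcome."""
    return sum(p(a, b, alpha, beta) for b in (1, -1))

print(abs(S))                   # ~2.828 > 2: CHSH is violated
print(marginal_a(1, 0.0, 0.3))  # ~0.5
print(marginal_a(1, 0.0, 1.7))  # ~0.5, independent of Bob's setting beta
```

So Bell's condition fails at the level of individual-outcome correlations, while at the level of the distributions each wing can access locally, nothing depends on the other wing's setting.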
 
  • #244
rubi said:
The problems with measurability that I mentioned earlier don't apply to the argumentation of ttn, because Bell locality doesn't require something like a translation-invariant measure on the space of wave-functions. It's only needed if you want to rule out all theories with huge hidden-variable spaces using Bell's theorem. That's why the counterexamples stevendaryl mentioned can work.

I have to say, though, that there is something philosophically screwy about nonmeasurable sets when you try to apply them to the real world.

Here's a very weird example: Suppose we have a game in which two people, Alice and Bob, generate random real numbers in the set {x : 0 ≤ x ≤ 1}. (Imagine spinning a dial, and taking the resulting angle, divided by 2π.) Beforehand, we pick a total ordering ≻ on the reals (not the usual ordering). If Alice's number is a and Bob's number is b, then Alice wins if a ≻ b. Otherwise, Bob wins.

Alice generates her real, a, and looks at it, but doesn't tell Bob what it is. Based on the value of her real, she is allowed to place a wager on the game. She notices the following fact:

There are only countably many values b that would beat her number a.

She reasons that the probability of Bob generating a real number that lies in any countable set is rigorously zero. So almost certainly (with probability 100%), Alice will win the game. So she's justified in betting her life savings on the outcome.

However, Bob takes a look at his real, b, and sees that there are only countably many values for a that would beat it. So, similarly, Bob is justified in betting his life savings on the outcome of the game.

Obviously, someone is not only wrong, but in a sense is infinitely wrong. The outcome that seemed almost certain didn't happen for one of them. Well, that's the breaks, sometimes things of measure zero happen. But they certainly shouldn't happen very often.

Well, it is mathematically possible to construct a total ordering ≻ on the reals so that in absolutely every round of the game, either Alice or Bob will experience something of probability zero happening. That is, we can arrange it so that for every real x, there are only countably many values of y that would beat it.

Pitowsky's model uses exactly the same type of construction as the one that would produce the total ordering \succ. So there is something a little unsettling about it. For probabilities to behave the way we think they should, we need for things of probability zero to never happen (or practically never). But in Pitowsky's construction, there are events of probability zero that happen every single time.
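For the curious, the construction behind such an ordering can be sketched in a few lines; it leans on the continuum hypothesis (CH), and the resulting ordering is badly nonmeasurable:

```latex
% Assume CH and fix a well-ordering <_w of [0,1] with order type \omega_1
% (the first uncountable ordinal). Define the "beats" relation by reversing it:
a \succ b \quad :\Longleftrightarrow \quad a <_w b .
% Every element of an \omega_1-ordering has only countably many predecessors, so
\{\, y \in [0,1] : y \succ x \,\} \;=\; \{\, y \in [0,1] : y <_w x \,\}
% is countable, hence Lebesgue-null, for every x -- yet in each round exactly
% one of the events a \succ b, b \succ a must occur.
```

The set {(a, b) : a ≻ b} built this way cannot be Lebesgue measurable, which is why the usual probability reasoning breaks down for it.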
 
  • #245
bohm2 said:
While many of these have been mentioned on various threads/posts I thought I'd post a list of the major papers I've come across arguing that violations of Bell's inequality implies non-locality, irrespective of any other issues (e.g. realism, determinism, hidden variables, pre-existent properties, etc.):

Bertlmann’s socks and the nature of reality
http://cds.cern.ch/record/142461/files/198009299.pdf

...

As rubi and morrobay point out, there are papers that come out the other way on the subject, i.e., that violations of Bell inequalities indicate it is local non-realism that should be selected. Here is one example:

http://arxiv.org/abs/0909.0015

Abstract:

"It is briefly demonstrated that Gisin's so-called 'locality' assumption [arXiv:0901.4255] is in fact equivalent to the existence of a local deterministic model. Thus, despite Gisin's suggestions to the contrary, 'local realism' in the sense of Bell is built into his argument from the very beginning. His 'locality' assumption may more appropriately be labelled 'separability'. It is further noted that the increasingly popular term 'quantum nonlocality' is not only misleading, but tends to obscure the important distinction between no-signalling and separability. In particular, 'local non-realism' remains firmly in place as a hard option for interpreting Bell inequality violations. Other options are briefly speculated on. "
 
  • #246
DrChinese said:
As rubi and morrobay point out, there are papers that come out the other way on the subject, i.e., that violations of Bell inequalities indicate it is local non-realism that should be selected. Here is one example:

http://arxiv.org/abs/0909.0015

Abstract:

"It is briefly demonstrated that Gisin's so-called 'locality' assumption [arXiv:0901.4255] is in fact equivalent to the existence of a local deterministic model. Thus, despite Gisin's suggestions to the contrary, 'local realism' in the sense of Bell is built into his argument from the very beginning. His 'locality' assumption may more appropriately be labelled 'separability'. It is further noted that the increasingly popular term 'quantum nonlocality' is not only misleading, but tends to obscure the important distinction between no-signalling and separability. In particular, 'local non-realism' remains firmly in place as a hard option for interpreting Bell inequality violations. Other options are briefly speculated on. "

The lack of separability in quantum mechanics is reflected in the fact that the wave function for more than one particle is not a function in 3 dimensional physical space, but a function in 3N dimensional configuration space. It's hard to know what "local" means for such a theory.

I don't know how significant this is, but in the Heisenberg picture, where the wave function is static and the operators evolve, all evolution is described by perfectly normal evolution equations involving ordinary 3D space plus time. So that is a sense in which the dynamics of quantum mechanics is perfectly local. Any nonlocality happens when you sandwich an operator between in- and out-states, which isn't something that takes place in time.
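For reference, the evolution equation in question is the Heisenberg equation of motion (for an operator with no explicit time dependence),

```latex
\frac{\mathrm{d}A_H(t)}{\mathrm{d}t} \;=\; \frac{i}{\hbar}\,\bigl[H,\, A_H(t)\bigr] ,
```

with the state vector held fixed. In a local quantum field theory, field operators at spacelike separation commute (microcausality), which is the sense in which this dynamics is local.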
 
  • #248
billschnieder said:
What do violations of Bell's inequalities tell us about nature?

Nothing.

http://neuron2.net/papers/bell.pdf

My feeling is that that paper is either wrong, or tautological. In neither case does it tell us anything about Bell's inequalities.
 
  • #249
stevendaryl said:
My feeling is that that paper is either wrong, or tautological. In neither case does it tell us anything about Bell's inequalities.

For what it's worth, billschnieder is a local realist. His reference does not meet the standards for PF. But as long as everyone knows it, considering this thread is essentially an opinion thread anyway, I guess it can't hurt.
 
  • #250
stevendaryl said:
My feeling is that that paper is either wrong, or tautological...
And you are never wrong or misguided. :rolleyes:
 