What Is an Element of Reality?

  • Thread starter: JohnBarchak
  • Tags: Element Reality
Summary
Laloe's exploration of "elements of reality" emphasizes the challenge of inferring microscopic properties from macroscopic observations, using a botanical analogy involving peas and flower colors. He argues that perfect correlations observed in experiments suggest intrinsic properties shared by particles, which cannot be influenced by external factors. The discussion highlights that these elements of reality must exist prior to measurement, as they determine outcomes regardless of experimental conditions. Critics challenge the analogy and the concept of hidden variables, questioning its validity and relevance to quantum mechanics. Ultimately, the debate centers on whether the existence of such elements can be scientifically substantiated.
  • #151
vanesch said:
But there is a problem in doing so. Indeed, in order to be able to apply a Born rule, you have to choose a preferred basis (the measurement basis). It is different if you apply the Born rule for "position" or for "momentum". It is different if you apply the Born rule in the z-spin basis, or in the x-spin basis.

If I understand correctly Bohmian mechanics - which I view as a MWI variant in a certain way, in that unitary evolution is also postulated without exception, even during a "measurement process" - then Bohmian mechanics solves the issues in the following way:

- the basis is postulated to be the position basis.

- there is a mechanism of assigning probabilities through the initial distribution of the "token" (the true particle positions, postulated to be initially distributed by the Born rule) and its associated dynamics (the guiding equation).
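
To make those two ingredients concrete, here is a minimal numerical sketch (a toy of my own, not from any reference: units with hbar = m = 1, a freely spreading Gaussian packet whose closed form I assume, and a crude Euler integrator) of the guiding equation v = (hbar/m) Im[(dpsi/dx)/psi], with starting positions drawn from |psi|^2 as the Born-rule postulate requires:

```python
import numpy as np

hbar, m, sigma = 1.0, 1.0, 1.0   # illustrative units: hbar = m = 1

def psi(x, t):
    # Freely spreading Gaussian packet (standard closed form, unnormalized;
    # the normalization cancels in the ratio (dpsi/dx)/psi used below).
    st = sigma * (1 + 1j * hbar * t / (2 * m * sigma**2))
    return np.exp(-x**2 / (4 * sigma * st)) / np.sqrt(st)

def velocity(x, t, eps=1e-6):
    # Guiding equation: v = (hbar/m) * Im( (d psi / dx) / psi )
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return (hbar / m) * np.imag(dpsi / psi(x, t))

# Euler-integrate a few trajectories whose initial positions are drawn from
# |psi(x,0)|^2 -- the Born-rule initial distribution postulated above.
rng = np.random.default_rng(0)
x = rng.normal(0.0, sigma, size=5)   # |psi(.,0)|^2 is N(0, sigma^2) here
dt = 0.01
for step in range(500):
    x = x + velocity(x, step * dt) * dt
print(x)  # the trajectories fan out as the packet spreads
```

Equivariance of the guiding dynamics then keeps the positions |psi(x,t)|^2-distributed at later times, which is how the Born statistics are carried along.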

cheers,
Patrick.
 
  • #152
vanesch said:
But there is a problem in doing so. Indeed, in order to be able to apply a Born rule, you have to choose a preferred basis (the measurement basis). It is different if you apply the Born rule for "position" or for "momentum". It is different if you apply the Born rule in the z-spin basis, or in the x-spin basis.

If I understand your point correctly, you are asking the question: why should the Born rule be applied for, eg, x-spin as opposed to, say, z-spin?

But it seems to me that you may as well ask: why should the Born rule be applied for a measurement on the electron going through the SG apparatus, as opposed to some measurement on some *other* electron. The answer, of course, is that this "other" electron was not fed into the SG apparatus. Likewise, the reason that the Born rule is applied for x instead of z-spin is that the SG apparatus was set up to measure x-spin, not z-spin.

This makes me think about a way that I decided long ago to conceptualize a typical Alice-Bob EPR experiment -- the delayed-choice variety, that is. Suppose Alice is allowed to vary the orientation of her polarizer (or SG apparatus) at the last instant to any angle she wants. This is free choice, right? Well, I have always preferred to simplify things by removing any considerations of "consciousness," "free will," etc, as follows. Instead of putting the angle of the polarizer directly under the control of Alice's hands, and thus under the control of her "free-willed" brain, we instead put the angle under the control of a computer that determines the angle as a function of the input from a Geiger counter. So going back to the earlier discussion, when you ask: why should the Born rule be applied to x-spin and not z-spin? We see that in this particular setup, the question "which angle do we use when we apply the Born rule?" is answered the same way that we answer the question "is the electron up or down?" In both cases, that is, the answer to the question is not arbitrary: it is the product of measurement.

vanesch said:
But that's of course observer-related!

Yes, I completely agree with you here. (I am thinking of the SG apparatus as being, in a way, an extension of the observer.) As a slight aside, I have often thought that the choice of the observer in QM is sort of like the choice of a frame of reference in GR. IOW, any choice is "valid." But for the analysis of any given experiment, you have to make your choice and stick with it.

vanesch said:
The problem with this statement (which I endorse !) is that MWI says:
i d/dt psi = H psi AND THAT'S ALL. The original program (as I understood it) was that we could then derive all physical consequences from it, including the "observed" probabilities by observers. This was a reaction to the Projection Postulate, which has a lot of problems, namely distinguishing "measurements" from "physical processes".

I think that I understand the purpose of the original program in the same way. And I agree with you that it has not quite achieved that goal!

vanesch said:
In an MWI setting, the idea somehow is that "a real me" only observes ONE of these branches. So there must be some kind of splitting of "me", so that there is now a "me" that will "have measured" the plus component, and that will now work in the first branch and a "me" that will have measured the minus component and will now work in the second branch.
There are NO PROBABILITIES involved in this game. BOTH happen.

OK, I'm with you.

vanesch said:
What *I* am claiming is that in order for probabilities to arise (namely that I have 90% chance to be me+ and 10% chance to be me-), you HAVE TO MAKE AN EXTRA ASSUMPTION, and that extra assumption amounts to the Born rule:

Yes: there is an extra assumption. And Everett makes this extra assumption in his original program. But I do not agree that it is *necessary* to make this extra assumption (see below).

vanesch said:
I explicitly consider this a "measurement process" and assign probabilities to the outcomes according to the Born rule. The way I prefer to do it (as I tried to explain in my journal), is by saying that although there are now 2 different body states (nothing surprising, there are also 2 different electron spin states), my "consciousness" has TO CHOOSE, ACCORDING TO THE BORN MEASURE, which body state it will live in.

But the problem here, as I discussed in my earlier post, is that a significant measure of your "other consciousnesses" will conclude that the Born rule is false. It seems to me that this (postulating "my consciousness") does not quite address the issue.

Don't get me wrong. I can sort of see the motivation of saying "my consciousness follows the Born rule." But it seems desirable to me to avoid falling back on that sort of explanation if at all possible, and I think it is.

vanesch said:
The only way to introduce probabilities in a natural way is by "inhabiting" these terms with observers, according to a kind of probability measure, and the whole point of MWI, as I understood it, was that this probability measure would emerge "naturally" (by counting of something, say).

Yes, I think the difficulty that we are talking about can only be addressed in the way you just mentioned: we want the probability measure to arise *naturally* by counting something. The question is, what are we counting? Well, the obvious answer is to count worlds, of course!

vanesch said:
Clearly, simply counting the terms doesn't work,

It does not work if we count terms (worlds) in the standard way, eg, a spin 1/2 measurement results in two worlds (two terms). Actually, it is interesting to point out that for a spin measurement, if p = 0.5, then counting the terms DOES work. In a more general sense, if we have a measurement with N outcomes, then counting the terms (worlds) DOES work if and only if the probability of each measurement outcome is equal to 1/N.

But why not try to *make* it work, via some sort of small modification of the standard workings of the MWI? That way, we could avoid making any sort of "extra" (perhaps metaphysical!) postulate. Why not?

David
 
  • #153
ZapperZ said:
Well it's about TIME you join in the fun, straycat! Took you long enough! :)

Zz.


Holy cow, Zz, 1771 posts you have :bugeye: ! It will take me quite some time to catch up!

straycat
 
  • #154
Three possibilities arise in the spin measurement experiments. Simply put they are:

1) The spins are correlated from the beginning, and it is just the way they are measured that appears to be spooky. When they are measured, they may be rotated to the measuring position. This implies they are within a hemisphere of the measuring position. Then the second particle is in the other hemisphere and can be rotated to the opposite position, or give a null measurement.

2) There is a field connection between the two particles (possibly a potential scalar/vector/vector-spin field) that allows FTL signals. Thus, when one is measured, a signal (possibly a spin-wave or torsion-wave signal) is sent to the other, and this signal destroys the field connection.

3) There is, however, a third way to look at the problem. The two particles could really just be a single system (one particle) that breaks in two when measured (its wave function collapses to a two-particle system). In this way the signal from one side of the system to the other is totally internal, and FTL here may not violate relativity, since the signal does not travel through space but through the internal structure of the single system.

juju
 
  • #155
Hey Patrick,

Let me see if I can flesh out what I meant by:

straycat said:
But why not try to *make* it work, via some sort of small modification of the standard workings of the MWI?

by focusing on something you said:

vanesch said:
The only way to introduce probabilities in a natural way is by "inhabiting" these terms with observers, according to a kind of probability measure, and the whole point of MWI, as I understood it, was that this probability measure would emerge "naturally" (by counting of something, say).

The first question here is "what are we counting?" According to the standard workings of the MWI, as I understand it, if a measurement result produces N distinct "worlds," then these N worlds are distinguished from one another by virtue of the fact that each of these N worlds corresponds to a physically distinct *observer* state. IOW, if the unitary evolution of the observer produces just one observer-state after some length of time, then there is no measurement and there is no "split." But if the unitary evolution of the observer takes one observer-state into N physically distinct observer-states, then we have effectively split into N "worlds." Therefore, in answer to the question: "what are we counting?" the answer is that we are counting the number of physically distinct observer-states (that evolve from a single observer-state, according to our unitary operator). Since we are counting physical states, I will call this a "physical measure" of our worlds.

So far, so good -- this jibes with what Everett did, iiuc. But Everett's next step was to assign the Born rule-generated "probability measure" to each branch. What we would like to do is to throw this in the trash, and instead simply allow the probability measure to emerge, as you say, *naturally*, by simply *defining* "probability" as being equivalent to the physical measure.

Hold on, you say. That just doesn't work! Well, of course it doesn't, unless we do some tweaking. Let me give an example. Suppose we have a spin measurement with probability of spin up p = 1/4, so (1-p) = 3/4. According to the standard workings of the MWI, the unitary operator takes our observer from a single state into two physically distinct states, one in which the observer has recorded "up," the other in which he has recorded "down." Let's imagine tweaking it like this: suppose that the unitary operator produces, not two physically distinct states, but rather four physically distinct states. In ONE of these, the observer has a physical record of "up"; in THREE of these, the observer has a physical record of "down." Of course, it needs to be determined in what way the three "down" observer-states are physically distinct. But I see no reason that this problem is insurmountable.
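
To make the tweak concrete, here is a toy enumeration (a sketch of my own, not anything Everett wrote; the representation of the p = 1/4 split as four equally counted branches is exactly the assumption being tested): once the branch multiplicities encode the rational Born weights, naive world-counting reproduces the usual statistics by construction.

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Hypothetical "tweak": a p = 1/4 spin measurement splits an observer into
# FOUR equally counted branches -- one recording "up", three recording "down".
p_up = Fraction(1, 4)
branches_per_split = p_up.denominator          # 4
records = ["up"] * p_up.numerator + ["down"] * (branches_per_split - p_up.numerator)

n_measurements = 8
worlds = list(product(records, repeat=n_measurements))   # 4**8 equally counted worlds

# Fraction of worlds whose record shows k "up" results:
counts = Counter(w.count("up") for w in worlds)
for k in sorted(counts):
    print(k, counts[k] / len(worlds))
# The distribution over k is exactly Binomial(n, 1/4): plain world-counting
# now agrees with the Born rule, because the weights were built into the count.
```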

So to make an argument in favor of this approach, I would say that it achieves what Everett set out to do in the first place, but failed to do.

What is the argument *against* this approach?

David
 
  • #156
straycat said:
The first question here is "what are we counting?" According to the standard workings of the MWI, as I understand it, if a measurement result produces N distinct "worlds," then these N worlds are distinguished from one another by virtue of the fact that each of these N worlds corresponds to a physically distinct *observer* state. IOW, if the unitary evolution of the observer produces just one observer-state after some length of time, then there is no measurement and there is no "split." But if the unitary evolution of the observer takes one observer-state into N physically distinct observer-states, then we have effectively split into N "worlds."

The way I understood the motivation behind Everett's work was that he didn't want to introduce some special physics for what is an "observation" in contrast to "a physical process described by a hamiltonian", because that is the main difficulty in a Copenhagen-like view.
What I tried to point out earlier is that if you have a global quantum state |psi>, then this "splitting in N terms" is only possible if we CHOOSE to define some part of the system as being "the observer": we split the total Hilbert space into a product space H_observer x H_system_under_observation. But mind you that this is a completely arbitrary thing to do if there is no concept of what is an observer ! ONCE we have split Hilbert space in this arbitrary way, it is possible to apply the Schmidt decomposition, and have |psi> written as a unique sum of product terms |obsstate> x |suo_state> in such a way that each set of states is a basis of its respective factor Hilbert space.

Something funny now happens: It turns out that we now have to work with only ONE of these terms in the future, and that the probability of taking that term is given by its norm squared. That's essentially Born's rule.
We STILL need something of the kind in a MWI setting, otherwise there is no way to pick ANY term as the "observed one". This is the essentially probabilistic aspect of quantum theory which has to be imported in one way or another.
The simplest thing is to just POSTULATE it. I wasn't aware that Everett ever did that. I thought that he hoped that this would somehow EMERGE from the unitary evolution - indeed, DeWitt had an argument, but one which only works at infinite evolution time.
The problem is that in order to POSTULATE this rigorously, one has to define what are observers, and what are their associated hilbert spaces.
BUT IF WE DO THAT, THERE'S NO PROBLEM WITH COPENHAGEN EITHER !
And Everett just complicates matters without solving anything.

straycat said:
Therefore, in answer to the question: "what are we counting?" the answer is that we are counting the number of physically distinct observer-states (that evolve from a single observer-state, according to our unitary operator). Since we are counting physical states, I will call this a "physical measure" of our worlds.

So far, so good -- this jibes with what Everett did, iiuc. But Everett's next step was to assign the Born rule-generated "probability measure" to each branch.

As I said before, I wasn't aware Everett did this! I thought he wanted somehow to have this measure EMERGE from the unitary evolution. I have to say the purpose of his program escapes me completely if he did postulate that.

straycat said:
Hold on, you say. That just doesn't work! Well, of course it doesn't, unless we do some tweaking. Let me give an example. Suppose we have a spin measurement with probability of spin up p = 1/4, so (1-p) = 3/4. According to the standard workings of the MWI, the unitary operator takes our observer from a single state into two physically distinct states, one in which the observer has recorded "up," the other in which he has recorded "down." Let's imagine tweaking it like this: suppose that the unitary operator produces, not two physically distinct states, but rather four physically distinct states. In ONE of these, the observer has a physical record of "up"; in THREE of these, the observer has a physical record of "down." Of course, it needs to be determined in what way the three "down" observer-states are physically distinct. But I see no reason that this problem is insurmountable.

Well, this is cheating ! This IS nothing else but introducing the Hilbert norm as the probability measure, in a disguised way. But in that case you don't need the disguise: just postulate it. In order to do so, you need to specify what are observer subspaces and what are system subspaces. If you can do that, there is no problem with the von Neumann view, and Everett can go.

straycat said:
So to make an argument in favor of this approach, I would say that it achieves what Everett set out to do in the first place, but failed to do.

What is the argument *against* this approach?

That with all you need to bring in (define what are observers, as distinguished from what are physical systems under observation), there's no point in having Everett's program in the first place. If it is clear what observers are (People ? Conscious people ? Computers ? Printers on paper ? Memory cells ? Macromolecules ?) then there's no problem with von Neumann to be solved in the first place. Just let us keep the projection postulate then, it is easier.

The whole point was that we DIDN'T have to define what exactly was an observer (THE difficulty with von Neumann). But if we don't, you cannot specify your measure either.

cheers,
Patrick.
 
  • #157
vanesch said:
The way I understood the motivation behind Everett's work was that he didn't want to introduce some special physics for what is an "observation" in contrast to "a physical process described by a hamiltonian", because that is the main difficulty in a Copenhagen-like view.
What I tried to point out earlier is that if you have a global quantum state |psi>, then this "splitting in N terms" is only possible if we CHOOSE to define some part of the system as being "the observer": we split the total Hilbert space into a product space H_observer x H_system_under_observation. But mind you that this is a completely arbitrary thing to do if there is no concept of what is an observer ! ONCE we have split Hilbert space in this arbitrary way, it is possible to apply the Schmidt decomposition, and have |psi> written as a unique sum of product terms |obsstate> x |suo_state> in such a way that each set of states is a basis of its respective factor Hilbert space.

My understanding is that according to Everett, we do, as you say, have to CHOOSE some part of the system as being "the observer." But this is not problematic because we are NOT LIMITED in our choice of the observer. That is, we do not have to restrict ourselves to choosing "microchips" or "people" or "really smart monkeys" or any such thing; rather, ANYTHING can play the role of "the observer." This was the whole point of using the word "relative" in the phrase "relative state formulation;" the entire conceptual framework is built around calculating stuff *relative to a given observer*. You can't calculate anything relative to an observer if you don't first pick an observer.

In fact, the use of the word "relative" is similar in spirit to its use in GR. In GR, you can't talk about the length of an object without first specifying the frame of reference that you're working in. Therefore, you talk about the length of an object "relative" to your chosen FoR.

vanesch said:
The whole point was that we DIDN'T have to define what exactly was an observer (THE difficulty with von Neumann). But if we don't, you cannot specify your measure either.

Once again, it's just like GR. In GR, there is NO SUCH THING as a "preferred" or "privileged" or "special" FoR: they are all equally "valid." Likewise, in QM, there is NO SUCH THING as a "preferred/special/privileged" observer: ANY subsystem of a composite system is as "valid" an observer as any other. But if you want to calculate lengths of objects, you have to pick a FoR first; likewise, if you want to do a Schmidt decomposition, you have to pick an observer first.

In my mind, this is true for any version of QM: Copenhagen, Everett, whatever. Everett's contribution, I think, was that his relative state formulation does a better job than Copenhagen of illustrating the above point.

Just to restate the similarity to the GR-viewpoint, consider the following sentence from page 455 of Everett's original paper [1]: "To any arbitrarily chosen state for one subsystem there will correspond a unique *relative state* for the remainder of the composite system." Everything, therefore, is conceptualized RELATIVE to some subsystem-state, which we call "the observer." Note the word "arbitrary." Although he didn't say this -- he probably thought it was obvious, but he would have been wrong -- he could have phrased it "To any arbitrarily chosen state for *any arbitrarily chosen* subsystem ..." In GR, the choice of FoR is arbitrary, but that doesn't mean we don't choose one. So what's wrong with choosing an observer in QM?

I must say that I learned to appreciate the MWI much more after reading Everett's original paper. And for the reasons that I gave above, I like the name "relative state formulation" better than "MWI." The sentence that I quoted above, and the paragraph from which I took it, are to me the most important sentence/paragraph in the entire paper. Like I said, I think that the entire notion of "relativity of states" is fundamentally inherent to the CI. The difficulty with the CI is just that Copenhagenists get caught up in trying to calculate how many neurons it takes to collapse the wavefunction, when in fact *any* arbitrarily chosen subsystem will work just fine.

More on probabilities, Born, etc later.

David

[1] Hugh Everett, "'Relative State' Formulation of Quantum Mechanics," Reviews of Modern Physics, vol. 29, no. 3, July 1957, pp. 454-462.
 
  • #158
straycat said:
Well, I have always preferred to simplify things by removing any considerations of "consciousness," "free will," etc, as follows. Instead of putting the angle of the polarizer directly under the control of Alice's hands, and thus under the control of her "free-willed" brain, we instead put the angle under the control of a computer that determines the angle as a function of the input from a Geiger counter. So going back to the earlier discussion, when you ask: why should the Born rule be applied to x-spin and not z-spin? We see that in this particular setup, the question "which angle do we use when we apply the Born rule?" is answered the same way that we answer the question "is the electron up or down?" In both cases, that is, the answer to the question is not arbitrary: it is the product of measurement.

No, not really ! It is only the case if you consider the computer to be an "observer". But I can just as well consider it part of the system, and then the only thing I can say is that my computer is now in an entangled state with the x-spin state of the electron - if we prefer to write our Hilbert-space state in a basis which is a product of "computer states" and "spin states". But I'm free to choose any other basis in my H_computer x H_spin Hilbert space. I'm not obliged to take a product basis, and I'm also not obliged to take the x-spin basis for the spin. Even if I work in a product basis, I can work with, say, the momentum states of the computer particles and the y-spin states of the electron. My state |psi> is perfectly expressible in that basis.
IT IS ONLY WHEN WE ASSIGN A SPECIAL STATUS TO THE COMPUTER that we want psi to be written in a series of terms such that each term contains ONE computer state of a single computer basis (that's the Schmidt decomposition !). But if the "computer", say, were just a photon, we wouldn't mind working in any other basis that suits us.
And the important point is to note that the application of any Born measure is DEPENDENT ON THE CHOICE OF BASIS WE MAKE.

straycat said:
Yes, I completely agree with you here. (I am thinking of the SG apparatus as being, in a way, an extension of the observer.)

See, you have to assign "special observer status" to something, here the SG apparatus.

straycat said:
As a slight aside, I have often thought that the choice of the observer in QM is sort of like the choice of a frame of reference in GR. IOW, any choice is "valid." But for the analysis of any given experiment, you have to make your choice and stick with it.

If it were so, there wouldn't be any issue to solve. The point is that the Born rule GIVES DIFFERENT OUTCOMES depending on your choice of basis !

straycat said:
But the problem here, as I discussed in my earlier post, is that a significant measure of your "other consciousnesses" will conclude that the Born rule is false. It seems to me that this (postulating "my consciousness") does not quite address the issue.

No, you didn't get my proposal. There is only ONE consciousness of "patrick". Each time, it has to choose which body state to inhabit. The other body states physically evolve "normally" but only ONE possesses my consciousness.
There are now 2 ways to continue:

1) Solipsist: there is, in the whole universe only ONE SINGLE CONSCIOUSNESS: namely mine. After all, there is only ONE SINGLE PHYSICAL PROCESS I'm absolutely aware of to be a true observation: namely MY observations.
2) There are many consciousnesses out there, which each, independently, jump to their next body state by using the Born rule. So each time there is a split, only ONE is chosen. This means that most people I'm interacting with right now are bodystates which DO NOT have a consciousness. But their body, as a physical structure, will act in exactly the same way as if they were "inhabited".

So, the result, for myself, is the same: the bodystates of others I'm aware of are not conscious :-)

Indeed, there are many bodystates out there which, if they were inhabited by a consciousness, would observe disrespect of the Born rule. But they aren't, and as such, it doesn't even make sense to talk about their "world" because now there is nothing that requires that sums of product states be considered as separate worlds.

straycat said:
Don't get me wrong. I can sort of see the motivation of saying "my consciousness follows the Born rule." But it seems desirable to me to avoid falling back on that sort of explanation if at all possible, and I think it is.

I fully agree with you. However, given the CURRENT STATE OF AFFAIRS, I prefer the above picture, because at least it gives me a coherent view of quantum theory. As I've been repeating often here, it is just a story! But I cannot find any other, that strictly sticks to current quantum theory, assigns some ontology to the formalism (not just "shut up and calculate") and doesn't introduce extra *physical* assumptions which modifies QM predictions.
And it allows me to justify unethical behaviour towards other bodystates :-)))

Ok, this is not entirely true. Bohmian mechanics also allows for such a story. Only, there is too much "symmetry loss" to be paid to my taste: why should we stick to a lot of symmetries for the wave function part, and throw them all overboard to construct the guiding equation ?

cheers,
Patrick.
 
  • #159
vanesch said:
And the important point is to note that the application of any Born measure is DEPENDENT ON THE CHOICE OF BASIS WE MAKE.

Yes, just like the length of an object is dependent on the choice of frame of reference we make.

vanesch said:
The point is that the Born rule GIVES DIFFERENT OUTCOMES depending on your choice of basis !

Yes, it certainly does, just like GR gives different outcomes for length depending on the choice of frame!


vanesch said:
IT IS ONLY WHEN WE ASSIGN A SPECIAL STATUS TO THE COMPUTER ...
See, you have to assign "special observer status" to something, here the SG apparatus.

But we DON'T assign any special status to the computer or the SG apparatus, any more than we assign special status to whatever frame of reference that we used to solve a problem in GR.

Do you see the parallel I'm drawing here between "relativity" of states and general "relativity"? (This was the main point of my previous post.)

vanesch said:
So, the result, for myself, is the same: the bodystates of others I'm aware of are not conscious :-)

Gee, I'm feeling sort of woozy ...

David
 
  • #160
OK, I'm going through Everett's paper to see where probabilities are introduced. On page 460, he states: "In order to establish quantitative results, we must put some sort of measure (weighting) on the elements of a final superposition. ... We must have a method for selecting a typical element from a superposition of orthogonal states. We therefore seek a general scheme to assign a measure to the elements of a superposition of orthogonal states \sum_{i} a_{i} \phi_{i}. We require a positive function m of the complex coefficients of the elements of the superposition, so that m(a_{i}) shall be the measure assigned to the element \phi_{i}."

Everett then goes on to discuss standard requirements of probability measures (things like additivity requirements, normalization, etc), and he demonstrates that m is restricted to the form m(a_{i}) = a_{i}^{*} a_{i} = |a_{i}|^{2}. So it's sort of made to look as if it couldn't have been any other way, that is, the probability measure MUST be given by the Born rule, and there is no other option.

So I suppose that Everett did not quite simply "assume the Born rule" outright. But it seems to me that he did the next closest thing: he assumed that the unitary evolution of the composite state is given by the familiar wave equation, and he furthermore assumed that the probability measure of an eigenstate must be a function of its coefficient (and not a function of, I dunno, something else).
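
For reference, the skeleton of that derivation can be reconstructed in a few lines (my notation, paraphrasing pp. 460-461, not Everett's verbatim text; the assumed inputs are that m depends only on |a_i| and is additive under grouping of branches):

```latex
% Reconstruction of the measure argument (my notation).
% Inputs: m(a_i) depends only on |a_i|, and m is additive when a group of
% orthogonal branches is replaced by a single element a\,\phi' with
% |a|^2 = \sum_i |a_i|^2 :
\[
  m(a) = \sum_i m(a_i), \qquad m(a_i) =: g\bigl(|a_i|^2\bigr)
  \;\Longrightarrow\;
  g\Bigl(\sum_i |a_i|^2\Bigr) = \sum_i g\bigl(|a_i|^2\bigr).
\]
% This is Cauchy's additive functional equation, so g(x) = c\,x, hence
\[
  m(a_i) = c\,|a_i|^2 ,
\]
% the Born weight once c is fixed by the normalization \sum_i m(a_i) = 1.
```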

So to recap what I said a few posts back: the difficulty I see with this scheme is that the "physical measure" (as I defined it earlier) and the "probability measure" are not equal, and I would seek to find some sort of modification whereby they CAN be equated, along the lines of the "tweaking" that I suggested earlier. Perhaps this would require a different unitary operator in place of the Hamiltonian?

David
 
  • #161
straycat said:
Do you see the parallel I'm drawing here between "relativity" of states and general "relativity"? (This was the main point of my previous post.)

No, I don't, because in GR, when calculating the result of AN OBSERVATION, this result is independent of the frame in which you care to carry out its computation. But in QM, it IS DEPENDENT, if you consider "choice of the basis in which we apply the Born rule".

Let us look at it with an example.
Imagine I have a system S, which has a 3-dimensional Hilbert space.
Its basis can be |a>, |b> and |c>, but also |1>, |2>, |3>, linked by a unitary basis transformation.

Now imagine that I have an "observer" O which gets entangled with system S through a measurement in the {a,b,c} basis:

Before, we had:
|O_virgin> |OO_virgin> ( u1 |a> + u2 |b> + u3 |c> )
and after this "measurement" we have:

u1 |O_a> |a> + u2 |O_b> |b> + u3 |O_c> |c>

Now, "observer" OO gets entangled with system S in the 123 base:

as we had:
|a> = xa1 |1> + xa2 |2> + xa3 |3> etc...

we obtain:

u1 xa1 |O_a> |OO_1> |1> + u1 xa2 |O_a> |OO_2> |2> + u1 xa3 |O_a> |OO_3> |3>
+ u2 xb1 |O_b> |OO_1> |1> + u2 xb2 |O_b> |OO_2> |2> + u2 xb3 |O_b>|OO_3>|3>
+ u3 xc1 |O_c> |OO_1> |1> + u3 xc2 |O_c> |OO_2> |2> + u3 xc3 |O_c> |OO_3> |3>

But we could have written that in another way too, if we first decomposed according to OO and then according to O:

u1 xa1 |OO_1> (xa1* |O_a> |a> + xb1* |O_b> |b> + xc1* |O_c>|c>) +
u1 xa2 |OO_2> (xa2* |O_a> |a> + xb2* |O_b> |b> + xc2* |O_c>|c>) +
u1 xa3 |OO_3> (xa3* |O_a> |a> + xb3* |O_b> |b> + xc3* |O_c>|c>) +

u2 xb1 |OO_1> (xa1* |O_a> |a> + xb1* |O_b> |b> + xc1* |O_c>|c>) +
u2 xb2 |OO_2> (xa2* |O_a> |a> + xb2* |O_b> |b> + xc2* |O_c>|c>) +
u2 xb3 |OO_3> (xa3* |O_a> |a> + xb3* |O_b> |b> + xc3* |O_c>|c>) +

u3 xc1 |OO_1> (xa1* |O_a> |a> + xb1* |O_b> |b> + xc1* |O_c>|c>) +
u3 xc2 |OO_2> (xa2* |O_a> |a> + xb2* |O_b> |b> + xc2* |O_c>|c>) +
u3 xc3 |OO_3> (xa3* |O_a> |a> + xb3* |O_b> |b> + xc3* |O_c>|c>) +

= (u1 xa1 + u2 xb1 + u3 xc1) xa1* |OO_1> |O_a> |a>
+ (u1 xa1 + u2 xb1 + u3 xc1) xb1* |OO_1> |O_b> |b>
+ (u1 xa1 + u2 xb1 + u3 xc1) xc1* |OO_1> |O_c> |c>
+ (u1 xa2 + u2 xb2 + u3 xc2) xa2* |OO_2> |O_a> |a>
+ (u1 xa2 + u2 xb2 + u3 xc2) xb2* |OO_2> |O_b> |b>
+ (u1 xa2 + u2 xb2 + u3 xc2) xc2* |OO_2> |O_c> |c>
+ (u1 xa3 + u2 xb3 + u3 xc3) xa3* |OO_3> |O_a> |a>
+ (u1 xa3 + u2 xb3 + u3 xc3) xb3* |OO_3> |O_b> |b>
+ (u1 xa3 + u2 xb3 + u3 xc3) xc3* |OO_3> |O_c> |c>

Let us be clear: this state is identical to the previous state ! It is just another way of writing it, here in the basis |a> |b> |c> where the other one was in the basis |1> |2> |3>. There is physically no difference, and systems O and OO interacted in identical ways with the system under study.

If we first assign "observer status" to O, then there are 3 probability measures, namely |u1|^2, |u2|^2 and |u3|^2, to be assigned to 3 "worlds" in which OO appears "entangled with a 1 - 2 - 3" state. If we then assign "observer status" to OO, we find an overall probability for O to have observed "a" and OO to have observed "1" of |u1|^2 |xa1|^2.
However, if we assign first "observer status" to OO, and then to O, then the overall probability of having O to have observed "a" and OO to have observed "1" equals |(u1 xa1 + u2 xb1 + u3 xc1) xa1* |^2, which is in general not the same as in the first case.

In von Neumann's approach, this is clear: because O and OO are incompatible measurements, first measuring O and then measuring OO is not the same as the opposite, because of the projection postulate. But in an MWI, where all is "entanglement", it matters in which basis we work ; if it were just a "point of view", the result shouldn't depend on it !

What we have done here is simply shown that the "Born measure" is different, for identical "observer states", according to whether we work in one or another basis.
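
The von Neumann version of that order-dependence is easy to check numerically. Here is a minimal sketch (my own toy example, with a randomly generated second basis; it only verifies the projection-postulate statement, not the MWI bookkeeping above):

```python
import numpy as np

rng = np.random.default_rng(1)

# State of the 3-dim system S, written in the {a,b,c} basis.
u = rng.normal(size=3) + 1j * rng.normal(size=3)
u /= np.linalg.norm(u)

# A second, incompatible basis {1,2,3}: the columns of a random unitary X,
# so X[:, 0] is |1> expressed in the {a,b,c} basis (the xa1, xb1, xc1 above).
X, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
ket_a = np.array([1.0, 0.0, 0.0])
ket_1 = X[:, 0]

# O measures first (project on |a>), then OO (project on |1>):
p_O_first = abs(np.vdot(ket_a, u))**2 * abs(np.vdot(ket_1, ket_a))**2

# OO measures first (project on |1>), then O (project on |a>):
p_OO_first = abs(np.vdot(ket_1, u))**2 * abs(np.vdot(ket_a, ket_1))**2

print(p_O_first, p_OO_first)  # generically unequal
```

Both expressions share the factor |<1|a>|^2, so they disagree exactly when |<a|psi>|^2 differs from |<1|psi>|^2 -- the generic case, matching the |u1|^2 |xa1|^2 versus |(u1 xa1 + u2 xb1 + u3 xc1) xa1*|^2 comparison above.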

cheers,
Patrick.
 
  • #162
straycat said:
So I suppose that Everett did not quite simply "assume the Born rule" outright. But it seems to me that he did the next closest thing: he assumed that the unitary evolution of the composite state is given by the familiar wave equation, and he furthermore assumed that the probability measure of an eigenstate must be a function of its coefficient (and not a function of, I dunno, something else).

Yes, indeed, this is Gleason's theorem. But again, there IS an extra assumption, which you point out: that the probability measure is only a function of the coefficient ; this is a property called non-contextuality.
But the very fact that you need this extra assumption, ABOUT A PROBABILITY MEASURE, kills the nice idea that from unitary evolution alone, you can deduce the probability measure in a natural way. You have to postulate its EXISTENCE before you can postulate any property about it (such as non-contextuality). That very existence of a probability measure kills (to my understanding) the original Everett program, because in order to postulate the existence of such a measure, you have to say WHEN you can apply it, which amounts to saying WHEN a physical system is a measurement system.

cheers,
Patrick.
 
  • #163
vanesch said:
Let us be clear: this state is identical to the previous state ! It is just another way of writing it, here in the basis |a> |b> |c> where the other one was in the basis |1> |2> |3>. There is physically no difference, and systems O and OO interacted in identical ways with the system under study.

Oops, what I wrote there is wrong. The two states are not identical, so my example fails...

sorry about that.

cheers,
Patrick.
 
  • #164
vanesch said:
Yes, indeed, this is Gleason's theorem.
ahhh, cool.

vanesch said:
But again, there IS an extra assumption, which you point out: that the probability measure is only a function of the coefficient ; this is a property called non-contextuality.
Yes, so it seems that Everett did, in fact, throw in an extra assumption: essentially, he (slightly indirectly) assumed the Born rule. I was thinking today about whether it would be possible to assume a *different* rule. For example, we could assume that the probability measure m is a function, not of the coefficient, but of the number of branches (ie, the number of base states) at any given split. I think that to satisfy the requirements of a probability measure, we simply need each measure m to be a real-valued number in [0,1], and we want the sum of the measures at any given "branching point" to equal 1. So we could, for example, assume that each trajectory gets followed with probability p = 1/N. This, to me, is the "natural" way for probabilities to emerge (it is basically equivalent to the "physical measure" I mentioned earlier).

The problem with setting m = 1/N, of course, is that it does not agree with experiment! But my point is that there is no *theoretical* reason we couldn't do it. I suppose that to make the scheme work with experiment, we would need a different unitary operator, ie one that takes one observer-state into N observer-states in a different fashion than the standard Hamiltonian.

vanesch said:
That very existence of a probability measure kills (to my understanding) the original Everett program, because in order to postulate the existence of such a measure, you have to say WHEN you can apply it, which amounts to saying WHEN a physical system is a measurement system.
But it seems to me quite clear when you apply it: you apply it at the very instant that the observer-state becomes entangled with the system-under-observation state.

This notion seems to me to be related to what I was trying to say earlier about the word "relativity" meaning the same thing in "relative state formulation" and "general relativity." Let me see if I can clarify this with an example. Suppose we are doing a simple EPR experiment: we have a pair of entangled, unpolarized electrons e_A and e_B emitted in opposite directions so that their spins are measured by Alice and Bob, respectively. Alice will measure the x-spin, and Bob the y-spin. Their SG apparatuses are equidistant from the emission site, and situated a distance L from one another. Once Alice observes the spin state of e_A, she immediately signals the result to Bob using a beam of light that encodes it. Bob does likewise for Alice.

So the question is: WHEN do you apply the Born rule? I claim that to answer this question, you FIRST have to pick an observer. So let's say we pick Alice. Alice becomes entangled with the spin state of e_A at the instant that e_A interacts with Alice's SG apparatus. It is not until some time T = L/c later that she receives Bob's light signal; thus, she becomes entangled with the spin state of e_B AFTER she becomes entangled with e_A. So if we were to draw the tree-diagram (or whatever you call it) that tells us when worlds split, then you would see that it FIRST splits into two branches corresponding to e_A=up and e_A=down, and THEN, at an amount of time T later, each of these branches splits further into two more branches corresponding to e_B=up and e_B=down.

Let's say that we decide to pick Bob instead of Alice as the observer. By symmetry, the tree diagram will look the same, except that the order of the splitting is reversed: in this case, the first split corresponds to the measurement of e_B, and the second split corresponds to the measurement of e_A.

So the point is that WHEN you apply the Born rule is RELATIVE to the observer. Once you have picked the observer, there is no ambiguity. In this respect, I think that Everett has achieved what he set out to do.
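
A trivial sketch of those two trees (purely illustrative, with "u"/"d" labeling the recorded outcomes and T = L/c the signal delay assumed above):

```python
from itertools import product

def branch_order(first, second):
    # The observer's worldline splits on the local measurement first, and on
    # the partner's reported result a time T = L/c later.
    return [f"{first}={r1}, then {second}={r2}" for r1, r2 in product("ud", repeat=2)]

print(branch_order("e_A", "e_B"))  # the tree relative to Alice
print(branch_order("e_B", "e_A"))  # relative to Bob: same leaves, splits reversed
```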

David
 
  • #165
straycat said:
So the question is: WHEN do you apply the Born rule? I claim that to answer this question, you FIRST have to pick an observer.

I agree with you. The problem is in the "picking of an observer". What is an observer, and what is not ?
That's how I'm led to talk about consciousness and things like that, because otherwise you have to specify physical interactions and systems which classify as "observer" and others which classify as "physical systems" with a hamiltonian. Once you feel free to do so, however, there is no problem with von Neumann either ! But in the current state of affairs, there is no indication of what is the physical distinction between an "observer" (something which apparently doesn't tolerate being in an entangled state with the rest of the world and has to "pick a branch" to "live in", instead of just happily assuming its entangled state like all good electrons are doing) and a physical system with a hamiltonian. So the very thing that "picks branches" and "lives in" them must be something quite peculiar, not a physical process, and in fact only a subjective experience, because from the outside, ALL physical systems happily get into entanglement and have hamiltonians (unless this turns out not to be true, for instance in gravity).
Now something that is based upon "subjective experience", "lives in" branches, etc... and is not physically observable from the outside makes me think a lot of "consciousness". But it doesn't matter what we call it or what it is ; only SOMETHING must qualify as "observer", and it must be associated with a physical structure (body?).
ONCE you do that, however, von Neumann is OK, no ?

cheers,
Patrick.
 
  • #166
vanesch said:
I agree with you. The problem is in the "picking of an observer". What is an observer, and what is not ?
That's how I'm led to talk about consciousness and things like that, because otherwise you have to specify physical interactions and systems which classify as "observer" and others which classify as "physical systems" with a hamiltonian.
...
But in the current state of affairs, there is no indication of what is the physical distinction between an "observer" (something which apparently doesn't tolerate being in an entangled state with the rest of the world and has to "pick a branch" to "live in", instead of just happily assuming its entangled state like all good electrons are doing) and a physical system with a hamiltonian.

But why do you persist in trying to divide the world this way, into things that do and do not "support to be in an entangled state with the rest of the world"? There is no such distinction. ANYTHING can play the role of observer, and ANYTHING can play the role of being in an "entangled state."

There are two main issues we have been talking about in this thread.

1) What physical objects qualify as "observer," and what do not? I claim that any physical object is a valid choice for either role. Therefore, there is no need to postulate "consciousness" or any such thing as a distinguishing property of the former.

2) The second issue has to do with assigning the probability measure m = a*a to each branch of the tree diagram. Does this issue relate to your postulating "consciousness"?

David
 
  • #167
Definition of probability

It seems to me that if we want to define a "probability measure," we need to define what probability *is*. The best way, imho, is to make it an observable. Here is how it might be done in general terms:

Suppose that we have a system in a state |x> which we know from experience can evolve, under set experimental conditions, into one of I states |x_i>, with i an integer in {1, 2, ..., I}. For example, a spin-1 particle, when put through an SG apparatus, can evolve into one of three states: +1, 0, or -1 (so we have I = 3). We want to know: what is the "probability" associated with each of these three outcomes?

The way we do this in practice is to prepare N identically-prepared systems |x>, do the experiment N times, count up the number of times n_i that we observe the |x_i> outcome, and say that the probability of |x_i> is p_i = n_i / N. Theoretically, we do this for N = infty, although practically, we just do this for some large finite N.

So now let's go back to the "physical measure" of the number of worlds that we get via the MWI. For a given I and N, we end up with I^N worlds. Our goal is to define a "probability measure" m_i that we can associate with each state |x_i>, and we will use it as a way to predict p_i. Once we find a way to calculate m_i, we'll call it "straycat's rule" :wink: (in place of the Born rule). I claim that *our goal* is to define m_i in such a way that each individual observer, at the end of a large number of measurements, will conclude that "straycat's rule" is correct: that is, that the predicted value m_i equals the observed value p_i. Actually, I can't say *each* observer. What I really want is for the **physical measure** of observers who conclude that straycat's rule is false to approach zero in the limit of a large number of measurements.

I'm pretty sure that it wouldn't be too difficult to show mathematically that the only way to define m_i with this property is to set m_i equal to 1/I. So this is "straycat's rule".
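
Here is a Monte Carlo sketch of that claim (my own check; it samples worlds instead of enumerating all I^N of them, which is equivalent when every world is counted equally): the fraction of worlds whose recorded frequency of a given outcome deviates from 1/I shrinks as N grows -- just the law of large numbers applied to the uniform counting measure.

```python
import numpy as np

# Under "straycat's rule" every one of the I**N worlds is counted equally,
# which is the same as each measurement picking one of I outcomes uniformly.
# Sample worlds and measure the fraction whose observed frequency of outcome 0
# strays from 1/I by more than a tolerance: it shrinks as N grows.
rng = np.random.default_rng(0)
I, tol, n_worlds = 3, 0.05, 100_000

for N in (10, 100, 1000, 10000):
    outcomes = rng.integers(0, I, size=(n_worlds, N))
    p_hat = (outcomes == 0).mean(axis=1)          # each world's observed p_0
    frac_bad = (np.abs(p_hat - 1 / I) > tol).mean()
    print(N, frac_bad)   # physical measure of "rule-violating" worlds -> 0
```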

So to return to the spin-1 example, this means that each outcome, +1, 0, or -1, is associated with probability 1/3. This does not agree with experiment, so straycat's rule doesn't work! There are two ways to fix this:

1) Use Born rule instead of straycat's rule. But then we have to deal with the argument that the physical measure of the number of worlds in which the observer believes that Born's rule is WRONG is nonzero - in fact, it can be manipulated into being pretty big! And this leads us into postulating things like "consciousness," which we KNOW deep down will get us nowhere! C'mon, you know this!

2) Use straycat's rule, but consider the possibility that we did not determine I correctly. Suppose that we prepared our spin-1 particle such that the probabilities of +1, 0, and -1 are, respectively, for example, 1/6, 2/6, 3/6. Then we could say that I = 6, and after a single measurement, we have 6 worlds, one/two/three of which correspond to the observation of +1/0/-1. Note that this is well-defined because, by the definition of physical measure, these six "worlds" correspond to six *distinct* physical observer-states.

Option #2, of course, has some big unanswered questions, especially: how do we represent the "physical state" of the observer, and how do we calculate the number of distinct physical states that it can evolve into --that is, how do we calculate I? There are probably lots of schemes that could be devised and tested to do this. The advantage of option #2 over option #1, though, is that it leaves the door open for some *genuine* theorizing, as opposed to leading us down the path toward some metaphysical theory involving consciousness. Unless, of course, a metaphysical theory is what we truly seek, deep down?

So to sum up, I seek a scheme such that the *physical measure* of the number of worlds such that the observer determines that straycat's rule is false *approaches zero* in the limit of a large number of measurements. Compare this to the existing situation in the MWI, in which the physical measure of the number of worlds that contain an observer who concludes that Born's rule is false does NOT approach zero.

David
 
  • #168
straycat said:
Option #2, of course, has some big unanswered questions, especially: how do we represent the "physical state" of the observer, and how do we calculate the number of distinct physical states that it can evolve into --that is, how do we calculate I? There are probably lots of schemes that could be devised and tested to do this.

Let me just point out one idea on how to start, which is based in classical mechanics. Suppose we have a system |O> that is in some classically well-defined state. We calculate its time-dependent evolution using the laws of motion. If we are using Newton's laws of motion, for example, and the initial state is genuinely well-defined, then we know that there is only one unique time-dependent evolution for |O>. So we must have I = 1.

But it turns out that such is not the case in general relativity! That is, it is possible to define a system that starts out in a *single, well-defined* state |O>, such that there is *more than one* valid solution to its evolution in time. The situation I'm thinking of is a paper [1] by Echeverria, Klinkhammer, and Thorne investigating the trajectory of a billiard ball. They found that if they allowed their manifolds to be non-simply-connected, they could find *more than one* trajectory of the billiard ball, such that each *individual* trajectory is one valid solution to the equations of motion. So in this case, we can have I > 1!

My point here is that my "option #2" above does in fact have room for development. We could represent the state of the observer using nothing other than the classical framework of GR, and as long as we admit multiply-connected manifolds, then there is the possibility that a single well-defined state can have I > 1 distinct "options" for its future evolution. (And by "straycat's rule," each option is "equiprobable.")

And we don't need to postulate "consciousness."

David

[1] Echeverria, Klinkhammer, and Thorne, "Billiard balls in wormhole spacetimes with closed timelike curves: Classical theory," Physical Review D, vol. 44, no. 4, 15 August 1991, pp. 1077-1099.
 
  • #169
straycat said:
Suppose that we have a system in a state |x> which we know from experience can evolve, under set experimental conditions, into one of I states |x_i>, with i an integer in {1, 2, ..., I}. For example, a spin-1 particle, when put through an SG apparatus, can evolve into one of three states: +1, 0, or -1 (so we have I = 3). We want to know: what is the "probability" associated with each of these three outcomes?

What you seem to miss (or what I'm not getting) is: WHY should we even talk about probabilities in the first place ? After all, unitary quantum theory just says that our spin-1 particle, after going through the SG apparatus, is now in a state of 3 superposed positions, and everything "looking at its position" is simply in an entangled state with the position states of the atom.
This is one single quantum state:

a|mybrain+>|myeye+>|detector+>|atompos+>|atomspin+> +
b|mybrain0>|myeye0>|detector0>|atompos0>|atomspin0> +
c|mybrain->|myeye->|detector->|atompos->|atomspin->

This is what unitary quantum theory tells us. So what should make us "split this in multiple worlds with multiple probabilities" ?
Why suddenly should we consider "mybrain+" as some different (?) observer from "mybrain-" ? What makes us now say that "mybrain+" observed the state of "myeye+" ? I can simply say that the physical structure which is my brain is entangled with other physical structures, and I can in fact not draw any conclusion about any probabilistic "observation", no ?
So, SOMETHING must somehow have a property that it can only occur in a product state with the rest of the universe, because otherwise - as far as I understand - there is no indication at all why we should observe a probabilistic world in which only one branch "seems to be realized", no ?
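
Just to underline the point, the displayed state really is ONE vector, which one can build explicitly (a toy construction of mine: each of the five subsystems is truncated to a 3-dimensional space, and the amplitudes a, b, c are arbitrary):

```python
import numpy as np
from functools import reduce

def basis(i, d=3):
    # i-th standard basis vector of a d-dim factor space.
    e = np.zeros(d)
    e[i] = 1.0
    return e

# Arbitrary amplitudes with |a|^2 + |b|^2 + |c|^2 = 1.
amps = [0.5, 0.5, np.sqrt(0.5)]

# brain, eye, detector, atom position, atom spin: five 3-dim factors.
n_factors = 5
state = sum(a * reduce(np.kron, [basis(i)] * n_factors)
            for i, a in enumerate(amps))

print(state.shape, np.linalg.norm(state))  # (243,) 1.0 -- one vector, no "split"
```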

The way I solve it (I am aware that it is a "shortcut" !) is to say that somehow there is something, a token, a "marble", which I call "consciousness" which can be associated with certain (one single ? solipsist ; all? back to animism :-) physical structures, but only with one single state which occurs in product form with the rest of the universe. So when entanglement occurs, it has to choose which term to pick, in a probabilistic way. It is this choice, and this probability, which determines the entire probabilistic structure of "observations".
I don't see how you can, without postulating such a "token" or "choice mechanism", go from the entangled state to a conclusion about probabilistic observations.
You also do that: you rewrite your entangled state in several terms (all with equal Hilbert norm), and then you DISTRIBUTE "observers" over their states, you being one of them. But what makes you think, in the first place, that different "observers" have to be distributed over these different terms ? Why cannot you happily assume the purely physical superposition of the wavefunction terms ? Why are different terms corresponding to DIFFERENT observers, which then, by themselves, have "different histories" and can calculate different observation probabilities ? Why cannot one "observer" just "observe" its entangled state ?

This step, from the wavefunction as a sum of terms, to picking ONE term and claiming it has something to do with the observations of an observer, is an extra postulate, and in doing so, you HAVE assigned "observer status" to certain physical structures. Mind you, that's EXACTLY what I do too :-) Only, I claim that this mechanism has somehow to be postulated OUTSIDE OF UNITARY QM. You should be aware of it. I don't see how you can do otherwise.

Once you ARE aware of it, that you need to assign "observer status" (I call it: give them a consciousness) to certain states of physical structures (at least to one structure, or even to all, if you want to), I don't see the difficulty in assigning directly the Born rule to the observations by that observer: you don't need to try to postulate other tricks from which you can then extract the Born rule: indeed, you ARE anyway postulating things, so go directly to the result you need.
You then also see that it is impossible, for an observer, to find out if another physical structure is an observer or not (has a consciousness or not, that's a well-known philosophical problem :-) Do electrons "observe" ? :-) Ok, for electrons, it is a bit hard because they don't have much memory space :-)

If you limit yourself to only ONE observer (one physical structure, which is associated with a token that chooses probabilistically, according to the Born rule, which term to pick in the wave function - just as well saying that it is YOU), you can then just as well go back to good old von Neumann formalism, using the projection postulate, where there is only ONE measurement apparatus in the universe, namely you. (well, me !).

cheers,
Patrick.
 
  • #170
vanesch said:
What you seem to miss (or what I'm not getting) is: WHY should we even talk about probabilities in the first place ?

Well, the way I defined it in my previous post, probability is simply an observable that is specific to a given branch, in the same way that the result of a spin measurement is an observable, the result of which is specific to a given branch. IOW, if you do N measurements on N identically prepared particles, then the sequence of N results is an observed quantity; likewise, the resulting probability is also an observed quantity.

In the scheme I have been promoting, the next step is to talk about the "measure" of worlds in which such-and-such a rule (Born rule, straycat's rule) is true. I would like to say that if the measure of worlds in which such-and-such rule is false is zero, then we can just ignore them.

But the fact remains that even if we follow my scheme (let's imagine I could make straycat's rule actually work), then there will still exist worlds in which "straycat's rule" is false. And the question is: why do we (or I should say, "I") exist in one of those worlds in which it works? I want to argue that it is because the "number of worlds" in which it works is way more than the "number" in which it does not.

But if I want to talk about the "number of worlds," then I need to define a measure. I prefer to use the "physical measure" because that just seems more "natural" to me. But can I really make a rigorous justification for this? As much as I'd like to, I actually DON'T have a rigorous argument for this! The best I have is that "it seems more natural."

So I completely agree with you that ANY assignment of a "probability measure" to each branch constitutes an "extra assumption." I submit that it would be worthwhile to explore what classes of measures *other than* Born's rule might actually fit into an actual theory that fits actual experience. Maybe Born's rule will turn out to belong to some narrow class of measures that are "un-beautiful," and some other class of measures will turn out to possess some kind of symmetry that is appealing. I don't know, I'm just rambling here.

As you said earlier, Everett made an extra assumption:
vanesch said:
But again, there IS an extra assumption, which you point out: that the probability measure is only a function of the coefficient ; this is a property called non-contextuality.

What I want to do is replace it with a different assumption:
straycat said:
I claim that *our goal* is to define m_i in such a way that each individual observer, at the end of a large number of measurements, will conclude that "straycat's rule" is correct: that is, that the predicted value m_i equals the observed value p_i. Actually, I can't say *each* observer. What I really want is for the **physical measure** of observers who conclude that straycat's rule is false to approach zero in the limit of a large number of measurements.

Why do we want this? Essentially, I am *assuming* the "physical measure" as the probability measure.

How might we argue that any one definition of measure is "better" than any other? I don't know!

It occurs to me that the biggest difference between the Born rule and straycat's rule could be summed up like this: Born assumes that m is a function of a, whereas I assume that m is a function of the total number of branches. Might we perhaps argue that only certain types of variables should be "allowed" into the argument? Perhaps a locality criterion, that the argument must represent "locally accessible" information? I would think that the "number of branches" at a given branch point is, in fact, a "local" variable -- that is, if we define a space of observer-states in which the observer "lives." Would the eigenvalues a_i be "local variables"? I don't know - just rambling again.
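Here is a toy contrast of the two assignments (the amplitudes are invented, and "straycat's rule" is rendered in the crudest possible way, as equal weight per branch):

import math

# Two-outcome branch point with assumed amplitudes a1, a2.
a1, a2 = math.sqrt(0.8), math.sqrt(0.2)

# Born: the measure of each branch is a function of its coefficient.
born_measure = [a1**2, a2**2]

# Crude branch counting: the measure is a function only of the number of
# branches, so each of the 2 branches gets weight 1/2.
count_measure = [1 / 2, 1 / 2]

print("Born measure:        ", born_measure)
print("Branch-count measure:", count_measure)
# The two assignments coincide only when |a1| == |a2|.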

vanesch said:
Why cannot you happily assume the purely physical superposition of the wavefunction terms ? Why are different terms corresponding to DIFFERENT observers, which then, by themselves, have "different histories" and can calculate different observation probabilities ? Why cannot one "observer" just "observe" its entangled state ?

Well, you *can* happily assume the superposition! As Everett writes in his paper: "there is no such transition, nor is such a transition necessary for the theory to be in accord with our experience. ... It is unnecessary to suppose that all but one [world] are somehow destroyed ..."

vanesch said:
Why cannot one "observer" just "observe" its entangled state ?

Everett again: "Arguments that the world picture presented by this theory is contradicted by our experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the Earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is."

I think that you know this argument, though. In fact, I think I remember that you made this argument in your discussion with Travis. (?) To me, the question is not: does it make sense to talk about "observers"? Rather, the question is: how do we justify ignoring the observers who conclude that our Theories of Nature are just plain wrong? What we are trying to do is to define a measure such that the measure of such observers approaches zero. Thus, the question is: how do we justify one measure over another? It would be nice to derive it in some way. But if we can't, then we accept the status quo: the adoption of a measure is an independent postulate.

David
 
  • #171
straycat said:
Well, you *can* happily assume the superposition! As Everett writes in his paper: "there is no such transition, nor is such a transition necessary for the theory to be in accord with our experience. ... It is unnecessary to suppose that all but one [world] are somehow destroyed ..."



Everett again: "Arguments that the world picture presented by this theory is contradicted by our experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the Earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is."

My point was not that I somehow think that our experience contradicts MWI ; although one can understand why Everett needed to defend himself here when he wrote that down.

My point was that, taking for granted (as I do) that there is strict unitary evolution, there is still NOTHING in the entire postulate system that relates this "statefunction of the universe" to any actual "observation", and that you need to say, somehow, that an observation is somehow related to ONE term. That is far from evident, a priori, and the comparison with Copernicus is some nice rhetoric, but it misses the mark: in classical mechanics you CAN calculate the accelerations that an individual on the surface of the Earth will experience. But simply given your wave function, you NEED TO MAKE AN EXTRA STATEMENT that what you are going to observe, as an observer, will have something to do with only ONE TERM in the Schmidt decomposition of the physical states of the observer and the rest. I don't see how you can somehow DEDUCE this. You could just as well make a statement that the observer will, say, always find the average of the values associated with 3 terms, no ? So that "an observer" corresponds to some random choice of 3 terms in the Schmidt decomposition (I'm just making this up here). Or to all of them. If you measure A, you always measure its expectation value, say. As I'm making these rules up while I'm writing, they will probably contain elementary errors, but it is to illustrate the fact that the very choice of a single term is something that is an extra assumption. And THEN, there are further extra assumptions, which we've been talking about, of how to assign probabilities. But the very first assumption is that somehow, ONE term has to be picked out. This, to me, is far from evident when you only have the unitary part.
But I'm not fighting it, so Everett's defense doesn't address my point. I'm simply saying that in order to do so, you need an extra assumption.

cheers,
Patrick.
 
  • #172
I am here resurrecting a very old thread in which vanesch and I discussed the issue of locality at great length. I was recently skimming through it, and noticed that (in what seemed at the time to be an important development) vanesch actually made a significant error that I didn't catch at the time. If vanesch is still around, maybe he'd like to comment. But I at least wanted to set the record straight.

vanesch said:
My claim is that causality only has a meaning as "information transfer". This can be "internal information transfer" also, even if we cannot perform real experiments in the lab because the internal quantity we're talking about is not directly accessible (such as a hidden variable) ; but one thing is necessary to be able to send information, and that is making free choices at the sending end. Upon my decision of acting at A, if something happens at B or not determines if there is information transfer and hence a causal link.
Some semantics: my "choice at A" _causes_ "an effect at B". In order to cause something, I have to have a choice in causing it, otherwise I just see it as a "description of what is happening" and not of "what causes what".
Let us call this view on causality "information - causality".
From "information - causality" follows then naturally "information - locality".
I told you why I think that is the right definition, it comes from a paradox you can obtain in SR if you don't stick to it.

You could also define a "correlation causality" and it leads to "Bell Locality".
"Correlation causality" states that you can only have statistical correlations if there is a direct dependence of the result at A on the result at B (in a statistical sense) or if they have both a common origin (state L). Bell Locality is the mathematical expression of this causality if we assume that the direct influence cannot take place ("locality"), that the only link between the two factors is through L (common cause).

But I don't see any requirement in special relativity to require Bell locality.

I will now try to find the link between "information locality" (required by SR) and Bell Locality (required by, eh, what ? We'll see :-).

My second claim is that "Bell Locality" is the above notion, applied to an underlying deterministic model ; that the notion that a "correlation implies a direct causal link or an indirect common cause link" finds its origin in a deterministic underlying model.
I think it is THIS point which is hard to get by, because THIS is the real paradigm shift needed to let go determinism. And I think it was this paradigm shift that Bell couldn't conceive, namely that you could have correlations which were NOT implying a direct causal link or an indirect common cause link.
I don't know what I can do more than reiterate Patrick's theorem :smile:
"Any stochastic theory satisfying Bell locality leads to a deterministic theory satisfying Bell locality".
I think it is a small step to show:
"From Bell locality follows information locality."

Indeed, the factorized form of P(A,B ; a,b) = P(A ; a) x P(B ; b) means that the choice of a cannot influence the probability of B.


I guess what I still should try to prove is that from information-local determinism follows Bell locality.

So now we have, from determinism, that P(A,B ; a,b,K) equals 1 or 0 ; so do the individual probabilities P(A ; a, b, K) and P(B ; a, b, K) ;
and from information locality follows that P(A ; a, K) and P(B ; b, K) do not depend on the "other" choices b and a respectively.

This means, in fact, that A = A(a) and B = B(b): for a given value (choice) of a, there is ONE A value that is the outcome, with certainty ; all other A values have probability 0.

So P(A(a), B(b) ; a,b,K) = 1 = P(A(a) ; a,K) x P(B(b) ; b,K)

So at least for the P=1 values, we can write the product form.
But this is also true for the P=0 values, because if A1 != A(a) OR B1 != B(b), then P(A1, B(b) ; a,b,K) = 0 = P(A1 ; a,K) x P(B(b) ; b, K) (namely 0 x 1)
and idem for the two other cases A(a), B1 and A1,B1.

So we have that determinism and information-locality lead to Bell locality.
So I have come full circle:

(1) From Bell locality follows Bell local determinism. (Patrick's theorem)
(2) From Bell locality follows information locality
(3) From information locality and determinism follows Bell locality

Together:

BELL LOCALITY <===> information locality and determinism

QED

cheers,
Patrick.

OK, first some terminology. What Patrick is here calling "information locality" is this mathematical condition on probabilities:

P(A|a,b,L) = P(A|a,L)

That is, the probability for a given outcome on one side doesn't depend on the distant setting b. And of course vice versa: P(B|a,b,L) = P(B|b,L).

In the literature, this condition is usually called "parameter independence" or PI for short. (That is Shimony's name for it. Jarrett, who introduced the idea, called it something else.) A similar condition is called "outcome independence" or OI for short. This condition says that

P(A|a,b,B,L) = P(A|a,b,L)

and, conversely,

P(B|a,b,A,L) = P(B|a,b,L)

That is, the probabilities for a given outcome on one side (given that we're conditionalizing on both settings) don't depend on the distant outcome.
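For concreteness, both conditions can be stated as mechanical checks on a theory's joint probability table. Here is a schematic Python rendering (the function names and the toy "theory" interface are my own, just for illustration):

# A "theory" is modeled as a function P(A, B, a, b, L) returning the joint
# probability of outcomes A and B given the settings a, b and the state L.

OUTCOMES = (+1, -1)

def marginal_A(P, A, a, b, L):
    return sum(P(A, B, a, b, L) for B in OUTCOMES)

def satisfies_PI(P, settings, L, tol=1e-9):
    # Parameter independence: P(A|a,b,L) does not depend on the distant setting b.
    return all(abs(marginal_A(P, A, a, b1, L) - marginal_A(P, A, a, b2, L)) < tol
               for A in OUTCOMES for a in settings
               for b1 in settings for b2 in settings)

def satisfies_OI(P, settings, L, tol=1e-9):
    # Outcome independence: P(A|a,b,B,L) = P(A|a,b,L). With both settings fixed,
    # this is the same as the joint factorizing into the two marginals (stated
    # this way to avoid dividing by P(B) when it is zero).
    for a in settings:
        for b in settings:
            for A in OUTCOMES:
                for B in OUTCOMES:
                    pA = marginal_A(P, A, a, b, L)
                    pB = sum(P(A2, B, a, b, L) for A2 in OUTCOMES)
                    if abs(P(A, B, a, b, L) - pA * pB) > tol:
                        return False
    return True

Bell Locality is then the demand that both checks pass (for every state L the theory allows).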

Now what I noticed about this old "proof" of Patrick's is that he smuggled in OI. He brings in PI explicitly, out in the open. But OI is brought in as well, without being identified as a premise. This happens right at the beginning where he says

So now we have, from determinism, that P(A,B ; a,b,K) equals 1 or 0 ; so do the individual probabilities P(A ; a, b, K) and P(B ; a, b, K) ;
and from information locality follows that P(A ; a, K) and P(B ; b, K) do not depend on the "other" choices b and a respectively.

You see, the "individual probabilities" should have been written initially as
P(A|a,b,B,K) and P(B|a,b,A,K). (He uses "K" instead of "L" to denote the complete specification of the state of the pair.) By simply omitting the in-principle-possible dependence on the distant outcomes (B for A and vice versa), Patrick tacitly assumes outcome independence (OI).

The rest of the proof then amounts to nothing but showing that applying parameter independence (PI) leads back to Bell's Locality condition. But it is a well-known and obvious fact that Bell Locality is equivalent to the conjunction of OI and PI. For, by OI

P(A|a,b,B,L) = P(A|a,b,L)

and then by PI

P(A|a,b,L) = P(A|a,L)

so that, using both of them, we have P(A|a,b,B,L) = P(A|a,L). And that is precisely Bell Locality.

So what? This shows that it is simply not the case that, as Patrick claimed, Bell Locality is equivalent to "information locality" (which remember is his name for PI) and determinism. Rather, Bell Locality is equivalent to PI and OI and determinism. But that is a really rather pointless conclusion, given that Bell Locality is also equivalent to PI and OI (without determinism). So really all that was shown here is two unrelated things:

1. Any time you have a stochastic theory, it's possible to introduce hidden variables and make it into a deterministic theory with the same predictions. (This is true for any theory, whether it violates Bell Locality or not. A Bell Nonlocal stochastic theory can be made into a Bell Nonlocal deterministic theory by adding hidden variables. A Bell Local stochastic theory can be made into a Bell Local deterministic theory by adding hidden variables.)

2. Bell Locality is equivalent to the conjunction of OI and PI.

These two distinct points are quite different from Patrick's conclusion from all of this -- namely, that what Bell *really* cared about was determinism, and that all is well (for the consistency of QM and SR) if you merely let go of the attachment to determinism. That just ain't so. Determinism really has nothing to do with it (given that any theory that isn't deterministic can be made into one that is by adding hv's).

The real question is whether orthodox QM (which violates Bell Locality and isn't deterministic) can be made into a theory that respects Bell Locality by adding hidden variables. Whether that new theory is deterministic or not is completely irrelevant. Of course, if you can do it at all, then you can do it with a deterministic theory (because any non-deterministic theory can be made into a deterministic one by adding more hv's). But that simply isn't the important issue here. The important thing is Bell Locality: can QM be replaced by something which actually respects Bell Locality? The answer is no (as Bell's Theorem proves) -- at least, not if the QM predictions are correct (and experiment suggests they are).
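That last claim is easy to check numerically. A minimal sketch, using the standard singlet correlation E(x,y) = -cos(x-y) and the CHSH form of Bell's inequality (which any Bell Local theory must satisfy):

import math

def E(x, y):
    # Singlet-state correlation for spin measurements along angles x and y.
    return -math.cos(x - y)

# A standard choice of settings for Alice (a, a') and Bob (b, b').
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, -math.pi / 4

# Any Bell Local theory obeys |S| <= 2 for this combination.
S = E(a, b) + E(a, b_prime) + E(a_prime, b) - E(a_prime, b_prime)
print("CHSH value |S| =", abs(S))   # 2*sqrt(2) ~ 2.83 > 2

So the QM predictions (confirmed by experiment) are simply out of reach of any Bell Local theory, deterministic or not.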

I hope this clarifies some things, or at least makes people realize they may have concluded the wrong thing way back when...
 
  • #173
ttn said:
I am here resurrecting a very old thread in which vanesch and I discussed the issue of locality at great length. I was recently skimming through it, and noticed that (in what seemed at the time to be an important development) vanesch actually made a significant error that I didn't catch at the time. If vanesch is still around, maybe he'd like to comment. But I at least wanted to set the record straight.

Vanesch is still around, but was on a holiday with not much internet access (dial-up at my mother-in-law's ) :smile:

I noticed this so-called "outcome independence" already before in some of the posts on Bell's stuff, and I think it is an abuse of probability theory, so I agree with what you write, but I don't consider it an error on my part, because "outcome independence" is something that is BUILT INTO KOLMOGOROV PROBABILITY.




OK, first some terminology. What Patrick is here calling "information locality" is this mathematical condition on probabilities:

P(A|a,b,L) = P(A|a,L)

That is, the probability for a given outcome on one side doesn't depend on the distant setting b. And of course vice versa: P(B|a,b,L) = P(B|b,L).

We agree fully here ; well, except for a nitpicking detail in notation which will turn out to be crucial:
I wrote: P(A ; a,b,L) etcetera, and that was not because I have a defective keyboard ; it is because I meant: a Kolmogorov probability measure, which is PARAMETRISED by a, b and L, and of which I mean the probability measure of A (with these parameters). It is a bit as if I wrote: f(x ; a) = x^(a-th prime number), and I say that f(x ; a) is a polynomial. It is in fact a family of polynomials, and a picks out the polynomial ; but clearly f(x,a) as a function of the 2 variables x and a is NOT a polynomial, or an analytic function ! It's not even defined for non-integer a.

That's why I wrote the ; and not a |, because | has a specific meaning within a Kolmogorov probability measure: conditional probability.
So I'd write: P(B|A ; a,b,L), which means the conditional probability measure of B on condition A, for the parametrised probability measure with parameters a, b and L, and which is equal (by definition) to the ratio of two measures:
the one of the intersection of A and B, and the one of A.

In the literature, this condition is usually called "parameter independence" or PI for short. (That is Shimony's name for it. Jarrett, who introduced the idea, called it something else.) A similar condition is called "outcome indpendence" or OI for short. This condition says that

P(A|a,b,B,L) = P(A|a,b,L)

and, conversely,

P(B|a,b,A,L) = P(B|a,b,L)

That is, the probabilities for a given outcome on one side (given that we're conditionalizing on both settings) don't depend on the distant outcome.

But as I was not talking about conditional probabilities, but only of probabilities (measures) of A and B, it does not make sense, in a Kolmogorov probability system (which we have, once we have fixed a, b and L), to write things that way. Let us fix the family for the moment. Once a, b and L are fixed, our Kolmogorov measure is fixed. Now, within this measure, it can, or cannot, be true that P(A) = P(A|B), of course. I don't see where I used that. But P(A) has a perfectly well-defined meaning, and so does P(A|B).
Moreover, in a DETERMINISTIC theory, there are no other probabilities but 1 and 0, for ALL possible measurable sets. I think that's all I needed.
Once we have fixed the probability measure (by fixing a,b and L: choosing the probability distribution amongst the family), then P(X) will be an element of {0,1}. I think that's what is meant by determinism, no ?
So P(A) will be 0 or 1 (depending on the choice of a, b and L) ; and so will P(A|B) if it is defined (if P(B) is not 0).

You see, the "individual probabilities" should have been written initially as
P(A|a,b,B,K) and P(B|a,b,A,K). (He uses "K" instead of "L" to denote the complete specification of the state of the pair.) By simply omitting the in-principle-possible dependence on the distant outcomes (B for A and vice versa), Patrick tacitly assumes outcome independence (OI).

This is correct, but it is part of the definition of a probability measure. I'm not talking about conditional probabilities, I'm just talking about the probability measures of A and of B, once the measure is fixed (by fixing a,b and L).

The rest of the proof then amounts to nothing but showing that applying parameter independence (PI) leads back to Bell's Locality condition. But it is a well-known and obvious fact that Bell Locality is equivalent to the conjunction of OI and PI.

That simply means then that my theorem was not my original work
But I think nowhere did I need to explicitly assume that the conditional probability P(B|A) = P(B). I just work with P(A) and with P(B) and with P(A,B), which is the measure of the intersection of A and B. These are 3 measures which come out of the Kolmogorov probability which is fixed once we have fixed a, b and L. And once we assume this distribution to be DETERMINISTIC, this means that these three numbers cannot be anything else but a 0 or a 1.
Now, information locality means that the probability of A at Alice does not depend on what I (Bob) do with my choice of b. It hasn't got anything to do with what I got as an outcome, because Alice doesn't know that. So information locality really means that P(A) (the only thing Alice can learn) is not dependent on what I can choose (the parameter b). It hasn't got anything to do with a conditional probability P(A|B), because Alice doesn't care what I measure, and I cannot INFLUENCE it. I can only influence the parameter b, to send a message to Alice. If I'm not supposed to be able to send a message to Alice, it is THIS probability (P(A)) which should be independent of my choice, and not P(A|B) - which Alice doesn't know about anyway.

Now, given the fact that the distribution (for a given choice of a, b and L) is deterministic, we have then the following possibilities for the measure with a given a, b and L, for each thinkable measurable set A and B:

P(A) = 1 ; P(B) = 1
P(A) = 0 ; P(B) = 1
P(A) = 1 ; P(B) = 0
P(A) = 0 ; P(B) = 0

Normally from the individual probability measures of A and B, we cannot determine the measure of the intersection of A and B, but in this degenerate case we can of course, and we have respectively:
P(A,B) = 1
P(A,B) = 0
P(A,B) = 0
P(A,B) = 0

Which can be written, in a trivial way, in the product form P(A) x P(B) in each case, so P(A,B) = P(A) x P(B) ; no matter what A and what B.

And that's all there was to show.
It is thanks to determinism that we got these degenerate probabilities which allowed us to infer the measure of the intersection of A and B. Nowhere did I need conditional probabilities, and hence I made no hypothesis about them.
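A trivial Python sketch of exactly this enumeration (nothing here beyond the four cases above ; the 0/1 values are the deterministic probabilities):

from itertools import product

# For fixed parameters (a, b, L), a deterministic distribution assigns every
# event probability 0 or 1. The joint event "A and B" then has probability 1
# exactly when P(A) and P(B) are both 1, and 0 otherwise -- the product form.
for pA, pB in product([1, 0], repeat=2):
    pAB = 1 if (pA == 1 and pB == 1) else 0
    assert pAB == pA * pB
    print(f"P(A)={pA}  P(B)={pB}  P(A,B)={pAB}  P(A)xP(B)={pA * pB}")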

cheers,
Patrick.
 
  • #174
vanesch said:
I noticed this so-called "outcome independence" already before in some of the posts on Bell's stuff, and I think it is an abuse of probability theory, so I agree with what you write, but I don't consider it an error on my part, because "outcome independence" is something that is BUILT INTO KOLMOGOROV PROBABILITY.

I don't really understand this. I just don't know any details about formal Kolmogorov probability theory. In what way are the variables one "conditions on" there (I gather that's technically the wrong word, but I don't know what the right one is) different from regular variables in regular conditional probabilities?

And how can it be that outcome independence is somehow built into the axioms of probability theory? What does this mean for OQM since that theory violates OI?


That simply means then that my theorem was not my original work

You shouldn't be too upset. The whole scheme of analyzing Bell Locality into Outcome Independence and Parameter Independence was torn to shreds by Maudlin.


But I think nowhere did I need to explicitly assume that the conditional probability P(B|A) = P(B).

I don't know now. You'll have to explain the difference between conditionalizing on a variable and regarding it as a parameter or whatever for
Kolmogorov.

But as far as I know, Bell Locality is still the condition that

P(A|a,b,B,L) = P(A|a,L).






I just work with P(A) and with P(B) and with P(A,B), which is the measure of the intersection of A and B. These are 3 measures which come out of the Kolmogorov probability which is fixed once we have fixed a, b and L.

Just to repeat my request above, can you clarify how this applies to orthodox QM? Because surely in OQM, we don't have

P(A,B;a,b,L) = P(A;a,b,L) * P(B;a,b,L).

Right? Somehow you've got to "conditionalize" (or whatever) one of the two factors on the right on the other outcome (just like Bayes' rule requires). You seem to be saying that there is no need or ability to do this, yet OQM requires it... :frown:
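To put numbers on this, here is a quick sketch using the textbook singlet probabilities -- P(same outcome) = (1/2)sin^2((a-b)/2), P(opposite outcomes) = (1/2)cos^2((a-b)/2) -- with an arbitrary pair of settings:

import math

def P_joint(A, B, a, b):
    # Joint outcome probabilities for the spin singlet, settings a and b,
    # outcomes A, B in {+1, -1}.
    if A == B:
        return 0.5 * math.sin((a - b) / 2) ** 2
    return 0.5 * math.cos((a - b) / 2) ** 2

a, b = 0.0, math.pi / 3
for A in (+1, -1):
    pA = sum(P_joint(A, B2, a, b) for B2 in (+1, -1))   # marginal: always 1/2
    for B in (+1, -1):
        pB = sum(P_joint(A2, B, a, b) for A2 in (+1, -1))
        print(A, B, " joint =", round(P_joint(A, B, a, b), 4),
              " product =", round(pA * pB, 4))

(Every product of marginals comes out 0.25, while the joints are 0.125 and 0.375 ; so the joint visibly fails to factorize.)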



And once we assume this distribution to be DETERMINISTIC, this means that these three numbers cannot be anything else but a 0 or a 1.
Now, information locality means that the probability of A at Alice does not depend on what I (Bob) do with my choice of b.

I hate to make a fuss over terminology, but could you use the technical term "parameter independence" if that's what you mean? Or "signal locality" if that's what you mean? (And btw, these are not the same. Violating signal locality requires parameter-dependence *and* a sufficient control over the prepared initial state of the system.)

It hasn't got anything to do with what I got as an outcome, because Alice doesn't know that.

That's right. I mean, that's why OQM is consistent with signal locality. Bob can't send a signal to Alice by making a measurement, because his measurement collapses his particle to some definite but unpredictable state. This causes Alice's particle also to collapse to some definite state, but how that relates to what Bob got is unknown to her. So she can't learn what he did by measuring something on her particle. In other words, the randomness associated with the collapse masks the non-locality of the collapse. OQM violates Bell Locality, but it doesn't permit superluminal signalling.


So information locality really means that P(A) (the only thing Alice can learn) is not dependent on what I can choose (the parameter b). It hasn't got anything to do with a conditional probability P(A|B), because Alice doesn't care what I measure, and I cannot INFLUENCE it. I can only influence the parameter b, to send a message to Alice. If I'm not supposed to be able to send a message to Alice, it is THIS probability (P(A)) which should be independent of my choice, and not P(A|B) - which Alice doesn't know about anyway.

Here you're sliding back and forth between "signal locality" and "what relativity requires." Remember, Bohmian Mechanics is also consistent with signal locality, yet somehow you (and most others) think that this theory is inconsistent with relativity. No double standards.


Normally from the individual probability measures of A and B, we cannot determine the measure of the intersection of A and B, but in this degenerate case we can of course, and we have respectively:
P(A,B) = 1
P(A,B) = 0
P(A,B) = 0
P(A,B) = 0

Which can be written, in a trivial way, in the product form P(A) x P(B) in each case, so P(A,B) = P(A) x P(B) ; no matter what A and what B.

And that's all there was to show.
It is thanks to determinism that we got these degenerate probabilities which allowed us to infer the measure of the intersection of A and B.

I still don't understand what you think this proves. Is it: that a deterministic theory automatically respects "outcome independence"? I suppose that's true, especially if you *define* determinism in terms of

P(A|a,b,L)

and

P(B|a,b,L)

equalling either 0 or 1. But then, what's actually relevant is not that those probabilities equal {0,1}, but simply that you've written them without any "outcome dependence"! And obviously a theory with no outcome dependence will respect OI. But that has nothing to do with whether it's deterministic.

Nowhere did I need conditional probabilities, and hence I made no hypothesis about them.

As far as I can tell, this is true by fiat only. You define "determinism" in a way that precludes outcome dependence from the very beginning. But this is misleading and unnecessary, since we know that Bell Locality = OI and PI *regardless* of whether or not we have also determinism.
 
  • #175
ttn, excuse me for breaking in, but I read this post pretty carefully, and I see you making a distinction between "signal locality":

Bob can't send a signal to Alice by making a measurement, because his measurement collapses his particle to some definite but unpredictable state. This causes Alice's particle also to collapse to some definite state, but how that relates to what Bob got is unknown to her. So she can't learn what he did by measuring something on her particle. In other words, the randomness associated with the collapse masks the non-locality of the collapse. OQM violates Bell Locality, but it doesn't permit superluminal signalling.

and "what relativity requires". But Einstein developed special relativity by considering observers (who might as well be called Alice and Bob) comparing their measurements in different inertial frames via signals limited to the speed c. So if QM obeys signal locality, why doesn't it satisfy what relativity requires?
 
  • #176
ttn said:
I don't really understand this. I just don't know any details about formal Kolmogorov probability theory. In what way are the variables one "conditions on" there (I gather that's technically the wrong word, but I don't know what the right one is) different from regular variables in regular conditional probabilities?

Ok, I think this is crucial to all that follows. Maybe I got too much of a mouthful with "Kolmogorov" ; it is just standard probability theory. Off the top of my head - correct me if I'm wrong - a probability measure according to Kolmogorov is a mapping P from a subset M of the power set of Omega into the interval [0,1] of real numbers such that:

P(Omega) = 1
P(empty set) = 0
P(A U B) = P(A) + P(B) if A and B disjoint

and some other, more subtle, properties making P into a measure,
see http://en.wikipedia.org/wiki/Kolmogorov_axioms

A, an element of M (and thus a subset of Omega), is called an "event", and P(A) is "the probability for the event A to happen".

For a finite set of elements Omega, M can be set equal to the powerset (the set of all subsets) of Omega.
These axioms define a standard probability distribution. Of course, for a given set Omega, there can be MANY DIFFERENT PROBABILITY DISTRIBUTIONS, and we can label some of them with a PARAMETER SET a, b or L. But there is a difference between looking at different sets within one probability distribution and looking at the probability of a set for different values of the parameter set, and that's the entire difference I tried to explain between the usage of | (which is WITHIN a single probability distribution) and the usage of ; (which refers to swapping between different probability distributions).
As I said, in all considerations of "causality" and "locality" and "determinism" and so on, one has to ASSUME FREE CHOICE somehow, and depending on this free choice, we CHANGE THE PROBABILITY DISTRIBUTION. So everything that depends on our free choice goes into the parameters that tell us which probability distribution we are going to use. The free choice is the setting of Bob and Alice's analysers: they can decide that freely, and as a function of the choice they make, we have different probability distributions of how things will happen. ALL things that will happen. There is also an extra parameter included, which is the COMMON cause, L, and which can be seen as a free choice of some unknown individual (a little devil, if you want). It fixes the entire probability distribution.
However, an OUTCOME is not something that FIXES the probability distribution ; it is part of what is described by that distribution. So it doesn't enter into any parameter list !
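A toy Python illustration of this ";" versus "|" distinction (everything here is invented ; the point is only that the parameters SELECT a distribution out of a family, while events are measured WITHIN the selected one):

import random

def make_distribution(a, b, L):
    # The parameters (a, b, L) SELECT one distribution out of the family.
    rng = random.Random(str((a, b, L)))
    outcomes = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
    weights = [rng.random() for _ in outcomes]
    total = sum(weights)
    table = {o: w / total for o, w in zip(outcomes, weights)}
    def P(event):
        # An event is a set of joint outcomes; P is the measure of that set.
        return sum(table[o] for o in event)
    return P

P = make_distribution(a=0, b=1, L="some complete state")
A = {(+1, +1), (+1, -1)}     # the event "Alice got +1"
B = {(+1, +1), (-1, +1)}     # the event "Bob got +1"
print("P(A)   =", P(A))
print("P(A|B) =", P(A & B) / P(B))   # conditional: a ratio WITHIN one distribution

Changing a, b or L swaps in a different P ; changing A or B just asks the same P about a different event.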

And how can it be that outcome independence is somehow built into the axioms of probability theory? What does this mean for OQM since that theory violates OI?

Ok, I formulated this badly, sorry. OI is not something that is "built into the axioms of probability theory" ; it is rather something that is well-defined within probability theory, but which I don't NEED. What I meant was that P(A) and P(A|B) are two well-defined quantities, meaning we can talk about P(A) without having to say whether "it depends also on B or not".
In fact, P(A|B) is nothing else but P(A sect B) / P(B) ; so it is a derived concept. Saying that P(A|B) = P(A) just comes down to saying that
P(A sect B) = P(A) x P(B). We usually write P(A sect B) as P(A,B).
It makes perfect sense to talk about P(A) and about P(A,B). These are numbers which are well defined if the probability distribution is well defined (meaning, the parameters which select the distribution from its family are fixed, in our case a, b and L).

I don't know now. You'll have to explain the difference between conditionalizing on a variable and regarding it as a parameter or whatever for
Kolmogorov.

As I said, the parameters select ONE of different probability distributions out of a family. Once we have our distribution, we can apply it to M. A conditional probability within this distribution is then nothing else but a shorthand for a fraction of two measures of this distribution.
You cannot write P(A ; a) = P(A and a) / P(a) if a is a parameter. You can, however, very well write P(A|B) = P(A sect B) / P(B) ; it is its very definition.

But as far as I know, Bell Locality is still the condition that

P(A|a,b,B,L) = P(A|a,L).

Which is to be re-written:
P(A|B ; a,b,L) = P(A ; a,L)

By definition, we have: P(A|B ; a,b,L) = P(A,B ; a,b,L) /P(B ; a,b,L)

Now, if we rewrite my "Bell condition" which is:
P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L), together with the fact that P(B ; a,b,L) = P(B ; b,L) (does not depend on parameter a - that's information locality to me), and we fill it into the definition of P(A|B ; a,b,L) above,
we have:

P(A|B ; a,b,L) = P(A ; a,L) x P(B ; b,L) / P(B ; b,L) = P(A ; a,L) and we're home:

both statements are equivalent.


Just to repeat my request above, can you clarify how this applies to orthodox QM? Because surely in OQM, we don't have

P(A,B;a,b,L) = P(A;a,b,L) * P(B;a,b,L).

Right? Somehow you've got to "conditionalize" (or whatever) one of the two factors on the right on the other outcome (just like Bayes' rule requires). You seem to be saying that there is no need or ability to do this, yet OQM requires it... :frown:

I'm sorry that I misformulated this: I didn't mean to imply that in just any Kolmogorov system you have to have this factorisation of course ! What I meant to say (and I badly expressed myself) was:
P(A ; a,b,L) is perfectly well defined. You do not have to say that the expression is somehow "incomplete" because I didn't include B in the list to the right. I could have been talking about ANOTHER quantity P(A|B ; a,b,L) ; only, I didn't talk about it, I didn't need it, because I only wanted to demonstrate P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L).
That's a perfectly sensible statement, and the three quantities are well defined in just any Kolmogorov system (the equality, of course, is not always true and has to be demonstrated for the case at hand).
I could also talk about things like P(A|B ; a,b,L) and so on, but I simply didn't need to. It is not an ERROR to talk about P(A ; a,b,L), and in doing so I do not make any assumption. That's what I put badly as "it is built into the axioms of probability theory".

I hate to make a fuss over terminology, but could you use the technical term "parameter independence" if that's what you mean? Or "signal locality" if that's what you mean? (And btw, these are not the same. Violating signal locality requires parameter-dependence *and* a sufficient control over the prepared initial state of the system.)

That is correct. There could be of course a conspiracy that L compensates for every change in a that I make. I assume of course same L.

Here you're sliding back and forth between "signal locality" and "what relativity requires." Remember, Bohmian Mechanics is also consistent with signal locality, yet somehow you (and most others) think that this theory is inconsistent with relativity. No double standards.

Ok, we've had this discussion already a few times. Because the statistical predictions of both theories are identical, there's no discrimination between both on those "black box" outcomes of course. It is a matter of esthetics of the inner workings. If you need to write that the state HERE is directly a function of the state (or its rate of change) THERE, in the equations, then this thing is not considered local, even if what you crank out of it doesn't see the difference. Sometimes this can be an artefact. For instance, the principle of minimum action is certainly not something local: you need to integrate over vastly remote pieces of space just to find out what you will do here. So that theory is a priori non-local. If you can rewrite it as a differential equation (Euler-Lagrange) then it has become local. But the result is the same.

I still don't understand what you think this proves. Is it: that a deterministic theory automatically respects "outcome independence"? I suppose that's true, especially if you *define* determinism in terms of

P(A|a,b,L)

and

P(B|a,b,L)

equalling either 0 or 1. But then, what's actually relevant is not that those probabilities equal {0,1}, but simply that you've written them without any "outcome dependence"! And obviously a theory with no outcome dependence will respect OI. But that has nothing to do with whether it's deterministic.

Again, I don't care about "outcome independence". I didn't need conditional probabilities at all. I needed to SHOW that P(A,B) factorizes into P(A) x P(B). This can be rewritten into something that uses outcome independence if you like, but I don't care.
What I wanted to show was that from determinism (all probabilities are 1 or 0), and from information locality (P(A;a,b,L) = P(A ; a,L) and P(B ; a,b,L) = P(B;b,L) ) follows the factorization statement that is Bell locality:
P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L).

In all this, I never used a conditional probability (and hence didn't need to say "outcome independence"). I used a property of the parametrisation of the family of distributions (namely, that all distributions with same b and L give the same probabilities for events B, no matter what a is ; this comes down to saying that my free choice of a has no influence on the probabilities of events at Bob's) ; and I used a property of each individual distribution (namely determinism, so that all results of mappings P is 1 or 0).
From that, I derived P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L).

That's sufficient. I can now of course bring one right hand side member to the left, and write:
P(A,B ; a,b,L) / P(B ; b,L) = P(A ; a,L)

and use the definition of conditional probability on the left:

P(A|B ; a,b,L) = P(A ; a,L)

and you will be happy because I now derived some "outcome independence" ; but first of all this makes no mathematical sense in the case of deterministic distributions, because I may be dividing by 0 (P(B ; b,L) is often 0), and second, it is only the use of a definition. Mind you, I didn't ASSUME this: I demonstrated it (although by dividing by 0).
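To see just how degenerate that step is, here is the definition failing for a deterministic distribution, in a few lines of Python (toy numbers, of course):

pB, pAB = 0, 0   # a deterministic case where the event B has measure 0
try:
    pA_given_B = pAB / pB   # P(A|B) = P(A,B) / P(B) is simply undefined here
except ZeroDivisionError:
    print("P(A|B) is undefined whenever P(B) = 0")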

As far as I can tell, this is true by fiat only. You define "determinism" in a way that precludes outcome dependence from the very beginning. But this is misleading and unnecessary, since we know that Bell Locality = OI and PI *regardless* of whether or not we have also determinism.

I would really like to know where I USED "outcome independence" and how this is defined. I used only a parametrized set of distributions, which are parametrized by a, b and L (meaning that all my probabilities are fixed when these parameters are fixed) ; then I used different events (subsets of Omega), namely A, B and (A sect B), on which I applied my now well-defined distribution. I showed that under the conditions I posed, P(A,B) = P(A) x P(B). That's all. Never did I need to use a conditional probability, so I don't see where I made such an assumption.

cheers,
Patrick.
 
  • #177
selfAdjoint said:
ttn, excuse me for breaking in, but I read this post pretty carefully, and I see you making a distinction between "signal locality" [...]
and "what relativity requires". But Einstein developed special relativity by considering observers (who might as well be called Alice and Bob) comparing their measurements in different inertial frames via signals limited to the speed c. So if QM obeys signal locality, why doesn't it satisfy what relativity requires?

Oh, yes, that's a good question. Certainly worth clarifying. I certainly don't mean to imply that relativity doesn't require signal locality. It does. Any theory which permits transmission of superluminal signals contradicts relativity. Period.

The question is: is signal locality *all* that relativity requires? I think it's pretty clear that it's not, or at least it's extremely debatable. Maybe the clearest way to make this point is by example. Take Bohmian Mechanics. This theory is blatantly nonlocal. You have two particles following definite trajectories, trajectories that are "choreographed" by the wave function (according to a deterministic guidance formula). (Hopefully Bohm's theory is familiar enough that that one-liner summary is sufficient.) But the guidance formula is blatantly non-local: the trajectory of one particle depends on the instantaneous position of the distant particle (and hence indirectly on the fields encountered by that distant particle). So for example in the EPR type situation, the two particles fly off toward their respective detectors; one of them enters the detector and veers off in a certain way in response to the magnetic fields inside the detector (you know, veers off toward one or the other of the SG-device's output ports); and this veering causes the distant particle also to veer in a certain way that ensures that if it later encounters magnetic fields oriented the same way, it will emerge from the opposite output port. Or something like that. The point is, Bohm's theory is only able to reproduce the QM correlations because of this non-local mechanism.

And yet Bohmian Mechanics is perfectly consistent with signal locality! So there is this blatantly nonlocal mechanism happening (which probably requires some notion of absolute simultaneity to even be *defined* clearly) and yet it turns out to be impossible to build a superluminal-telephone according to the theory. Surely this suggests that "what relativity requires" is something stronger than merely the condition that you can't build a superluminal-telephone. Someone who believed in relativity and wasn't bothered by the nonlocality in Bohmian Mechanics would, I think we'd have to say, not have too deep an understanding of what relativity actually means.

Does that make sense?

Of course, one can make the same point with orthodox QM, which has two separate dynamical laws: Schroedinger's equation and the collapse postulate. And if you take Bohr's completeness doctrine seriously, the collapse postulate is a blatantly nonlocal mechanism by which something you do in one place can affect the state of the system somewhere else, instantaneously. And yet this theory too is consistent with signal locality. You can't transmit information superluminally using orthodox QM. So this too suggests that signal locality is a necessary, but not a sufficient, condition for consistency with relativity. (BTW, the reason it's harder to convince people of the point using OQM as an example is that there is a pervasive muddle-headedness about the collapse postulate. People seem to want to waffle back and forth on whether the collapse is epistemological or physical depending on whether they're presently defending the locality claim or the completeness claim. See the final section of quant-ph/0404016 for some further discussion and references on that point.)

So then, if signal locality is necessary but not sufficient for "genuine consistency with relativity" what other conditions are needed? Bell proposed "Bell Locality" as a candidate for this. He argues for it very eloquently in a number of his papers. See, for example, "La Nouvelle Cuisine", which is reprinted in the (new, 2nd edition of) "Speakable and Unspeakable". (I think it's the very last chapter in the book, written after the first edition of "Speakable..." came out.) It's a very good read. Highly recommended.

What role does "Bell Locality" play in this whole debate? Well, the obvious thing to say is that Bell Locality is the locality assumption that Bell imposes in the derivation of Bell's inequalities. He assumes you've got a hidden variable theory which satisfies Bell Locality, and then shows that such a theory (regardless of any of the details about what the hidden variables *are*, which is what makes this powerful) must satisfy the inequality. And since QM and experiment both say the inequality is not satisfied, this means that no Bell Local hidden variable theory can be the correct theory. Right?

But that's not the end of the story. If it were, then it would be right to take Bell's Theorem as tolling for the hidden variable program: if hv theories have to violate Bell Locality, that means they conflict with relativity, which just shows we should have believed Bohr all along that hidden variables were wrong, that OQM is already complete. But not so fast, because OQM also violates Bell Locality. (That is essentially the EPR argument.) And that means whether you have hidden variables or not, you're stuck with a violation of Bell Locality. No Bell Local theory can match experiment. So nature violates Bell Locality.

That much is (or at least ought to be) uncontroversial. The question is simply: is Bell's candidate for a stronger locality principle (i.e., his identification of "Bell Locality" with "what relativity really requires") correct? Or is there some intermediate between the obviously-too-weak "signal locality" and the allegedly-too-strong Bell Locality? That's an interesting question. But as far as I know, nobody has proposed any such plausible intermediate.
 
  • #178
vanesch said:
As I said, in all considerations of "causality" and "locality" and "determinism" and so on, one has to ASSUME FREE CHOICE somehow, and depending on this free choice, we CHANGE THE PROBABILITY DISTRIBUTION. So everything that depends on our free choice goes into the parameters that tell us which probability distribution we are going to use. The free choice is the setting of Bob and Alice's analysers: they can decide that freely, and as a function of the choice they make, we have different probability distributions of how things will happen. ALL things that will happen. There is also an extra parameter included, which is the COMMON cause, L, and which can be seen as a free choice of some unknown individual (a little devil, if you want). It fixes the entire probability distribution.

OK, I'm basically with you here, although I would point out that you can't define "L" (which is supposed to be Bell's "Lambda") twice. Bell defines it as a complete specification of the state of the particles. (It's of course hard if not impossible for us to know precisely what that consists of in general. But the point is: some particular *theory* whose locality you're assessing will *tell* you what a complete state specification consists of. For example, Orthodox QM tells us that the wave function alone provides this complete description.)

But then here you say that L can also be thought of as a freely chosen parameter. Well, maybe, maybe not. Again, this is something we can't just stipulate a priori but, rather, have to find out from a given theory. The theory we're judging will tell us whether it is or isn't possible to *prepare* a system with a specific, desired state. According to OQM, for example, this is possible. But according to Bohmian Mechanics it isn't. So you can't just assume that "L" is one of the freely chosen parameters the way "a" and "b" are.

This is an elementary point, but I'm worried that this is going to mean that (for some theories at least) "L" is an "uncontrollable" that is therefore in the same category as "A" and "B". I mean, isn't that the distinction you're making above? Controllable (ie freely choosable) variables constitute the "parameter set", and the uncontrollables are to be thought of as the "outcomes" -- the things we talk about the probability *of* given the parameter set. But then for a theory in which the state of the particle pair isn't freely choosable, we don't get to put "L" in the parameter set and ... well... all hell breaks loose.





Ok, I formulated this badly, sorry. OI is not something that is "built into the axioms of probability theory" ; it is rather something that is well-defined within probability theory, but which I don't NEED. What I meant was that P(A) and P(A|B) are two well-defined quantities, meaning we can talk about P(A) without having to say whether "it depends also on B or not".

I don't see that. In fact, you can't just talk about "P(A)" without specifying (using your terminology) the parameter set. Otherwise it's just vague. Do you mean P(A) with this setting or that setting or this state preparation or that state preparation or what?

But then, who are you to say a priori what the probability of A depends on *really*? For all we know going in, it might depend on a, b, L, B, the price of tea in china, and the color of my socks. In principle, physically speaking, we have to have some kind of argument that we've captured all the possibly-relevant variables. I can't put my finger on it yet, but you are somehow sneaking in a physical assumption -- outcome independence -- under the guise of the Kolmogorov formalism. Let's see if it emerges below...



In fact, P(A|B) is nothing else but P(A sect B) / P(B) ; so it is a derived concept. Saying that P(A|B) = P(A) just comes down to saying that
P(A sect B) = P(A) x P(B). We usually write P(A sect B) as P(A,B).
It makes perfect sense to talk about P(A) and about P(A,B). These are numbers which are well defined if the probability distribution is well defined (meaning, the parameters which select the distribution from its family are fixed, in our case a, b and L).

OK, this is all fine.



As I said, the parameters select ONE of different probability distributions out of a family. Once we have our distribution, we can apply it to M. A conditional probability within this distribution is then nothing else but a shorthand for a fraction of two measures of this distribution.
You cannot write P(A ; a) = P(A and a) / P(a) if a is a parameter.

Yes, OK, because P(a) is meaningless if we're treating "a" as a freely-choosable variable. This all makes sense. Of course, I'm still worried that you're going to have to treat "L" exactly the way you say you can't treat "a" here, since (at least in some theories) "L" might not be freely-choosable. But let's see below if this actually comes up in any important way...


You can, however, very well write P(A|B) = P(A sect B) / P(B) ; it is its very definition.

Yes, sure.


[Bell locality] is to be re-written:
P(A|B ; a,b,L) = P(A ; a,L)

Yes, sure, mod my worry about "L".


OK, here's the real meat of your last post finally:

By definition, we have: P(A|B ; a,b,L) = P(A,B ; a,b,L) /P(B ; a,b,L)

Now, if we rewrite my "Bell condition" which is:
P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L), together with the fact that P(B ; a,b,L) = P(B ; b,L) (does not depend on parameter a - that's information locality to me), and we fill it into the definition of P(A|B ; a,b,L) above,
we have:

P(A|B ; a,b,L) = P(A ; a,L) x P(B ; b,L) / P(B ; b,L) = P(A ; a,L) and we're home:

both statements are equivalent.

OK, this is fine. So "Bell Locality" and the def'n of conditional probability together yield that

P(A|B ; a,b,L) = P(A ; a,L)

Right? But is this new? I mean, this is just exactly what I would have said before (not making any distinction between "parameter sets" and "variables we conditionalize on") by writing:

P(A|B,a,b,L) = P(A|a,L)

Right? So I don't think there's really anything new here, neither a new point nor something new and important that follows from your different math notation. But I'm sure you'll correct me if I'm missing something here.


Re: my worry that what you were saying was violated by OQM, you said:

I'm sorry that I misformulated this: I didn't mean to imply that in just any Kolmogorov system you have to have this factorisation of course ! What I meant to say (and I badly expressed myself) was:
P(A ; a,b,L) is perfectly well defined. You do not have to say that the expression is somehow "incomplete" because I didn't include B in the list to the right. I could have been talking about ANOTHER quantity P(A|B ; a,b,L) ; only, I didn't talk about it, I didn't need it, because I only wanted to demonstrate P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L).
That's a perfectly sensible statement, and the three quantities are well defined in just any Kolmogorov system (the equality, of course, is not always true and has to be demonstrated for the case at hand).
I could also talk about things like P(A|B ; a,b,L) and so on, but I simply didn't need to. It is not an ERROR to talk about P(A ; a,b,L), and in doing so I do not make any assumption. That's what I put badly as "it is built into the axioms of probability theory".

OK, I don't see any problem with this. I mean, it's certainly true that in OQM it is possible to talk about P(A;a,b,L). It is, for example, 50% (independent of "a" and "b" if L is the singlet state and we're talking about the usual EPR/Bell situation).



That is correct. There could be of course a conspiracy that L compensates for every change in a that I make. I assume of course same L.

OK, but then keep in mind that your identification of parameter independence with signal locality is conditioned (ha ha ha) on this assumption. In fact, a violation of PI is not sufficient to establish violation of signal locality. You also need controllability of the state L. (How to formulate exactly how much controllability is needed, I'm not sure...??) That is,

NOT(Signal Locality) ==> NOT(Parameter Independence) + L-Controllability

Or: Inadequate L-controllability ~OR~ Parameter Independence is needed to have Signal Locality. Bohm gets it the first way, OQM the second.


Ok, we've had this discussion already a few times. Because the statistical predictions of both theories are identical, there's no discrimination between both on those "black box" outcomes of course. It is a matter of esthetics of the inner workings. If you need to write that the state HERE is directly a function of the state (or its rate of change) THERE, in the equations, then this thing is not considered local, even if what you crank out of it doesn't see the difference.

Of course. And my point is just that both OQM and Bohm violate this -- both theories require (in some form) the state HERE to depend on the state THERE, in the equations. In Bohm, the offending equation is the guidance formula; in OQM it's the collapse postulate.

You don't disagree with that, do you? Partly I keep saying the same thing over and over again because you and the others keep finding new ways to subtly reject what I thought had earlier been agreed upon!

OK, here's the other meaty part of your post:

Again, I don't care about "outcome independence". I didn't need conditional probabilities at all. I needed to SHOW that P(A,B) factorizes into P(A) x P(B). This can be rewritten into something that uses outcome independence if you like, but I don't care.
What I wanted to show was that from determinism (all probabilities are 1 or 0), and from information locality (P(A;a,b,L) = P(A ; a,L) and P(B ; a,b,L) = P(B;b,L) ) follows the factorization statement that is Bell locality:
P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L).

OK, this all clarifies for me what you were doing in that previous post. So you're just saying that determinism permits us to write

P(A,B ; a,b,L) = P(A ; a,b,L) x P(B ; a,b,L)

and then we can impose Parameter Independence and get that

P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L)

Unfortunately, I can't see any way to object to that. :cry: :smile: (At least, not right now.)

So where does this leave us? If Bell Locality is equivalent to the conjunction of OI and PI -- and also to the conjunction of PI and Determinism -- does that mean that Outcome Independence is equivalent to Determinism? That's a surprising and kind of interesting conclusion I guess.

But I'm still not sure what this means in terms of interpreting Bell's Theorem. It seems only to show that for a deterministic theory, violation of PI is sufficient for violation of Bell Locality. So if you know that a given theory is deterministic, you don't have to check for Bell Locality to see if it's going to obey Bell's Inequality -- you can simply check for Parameter Dependence.

But you want to say something like this means what Bell was really adding to relativity's "no signalling" condition is an unwarranted desire for determinism? I don't see that, especially considering Signal Locality and PI aren't the same thing. Well, now that we're on the same page about what you actually showed here, I'm sure you can help me understand how you want to interpret it...
 
  • #179
ttn said:
But then here you say that L can also be thought of as a freely chosen parameter. Well, maybe, maybe not. Again, this is something we can't just stipulate a priori but, rather, have to find out from a given theory.

L is the "complete description of the state according to the theory at hand". In quantum theory, L is just the wave function, indeed. In another theory, it is whatever describes the state completely.

The theory we're judging will tell us whether it is or isn't possible to *prepare* a system with a specific, desired state. According to OQM, for example, this is possible. But according to Bohmian Mechanics it isn't.

I would say that such a theory has a serious problem. In fact, if that is the case, then L is not the "state of the system"; the state should then be less well specified, but "preparable", and the stochastic effects of what is fundamentally uncontrollable should be part of the probability distribution, not of the state L. At least if it is IN PRINCIPLE impossible to prepare the system that way, and not merely highly impractical (such as, say, the phase space point of a classical gas).

So you can't just assume that "L" is one of the freely chosen parameters the way "a" and "b" are.

I think it is somehow a problem if IN PRINCIPLE you cannot freely prepare the "state" of the system L. Because what is a "state" then? Isn't it just part of the stochastic description? But OK, we can do away with this objection by giving ourselves the status of a god who is not bothered by this, and who CAN decide upon L.

This is an elementary point, but I'm worried that this is going to mean that (for some theories at least) "L" is an "uncontrollable" that is therefore in the same category as "A" and "B". I mean, isn't that the distinction you're making above?

It shouldn't. Of course, you can give a DISTRIBUTION to L, and redefine all your probability distributions by integrating over the uncontrollable parts of L. That just takes part of L out and puts it into the distribution P.
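
A little sketch of that bookkeeping, with notation of my own (split L into a controllable part L_c and an uncontrollable part L_u with distribution rho, and sum L_u into P; the numbers are made up):

Code:
# Absorbing the uncontrollable part of L into the distribution:
#   P(A; a, L_c) = sum over L_u of P(A; a, L_c, L_u) * rho(L_u)

rho = {0: 0.3, 1: 0.7}           # distribution of the uncontrollable part L_u

def P_A_full(A, a, L_c, L_u):    # probability given the complete state (L_c, L_u)
    return 1.0 if (a + L_c + L_u) % 2 == A else 0.0

def P_A_effective(A, a, L_c):    # L_u integrated out: now part of P, not of the state
    return sum(P_A_full(A, a, L_c, L_u) * p for L_u, p in rho.items())

print(P_A_effective(0, a=1, L_c=0))   # 0.7 -- deterministic underneath, stochastic on top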

Controllable (ie freely choosable) variables constitute the "parameter set", and the uncontrollables are to be thought of as the "outcomes" -- the things we talk about the probability *of* given the parameter set.

Right, but they aren't even part of the "outcomes", they are part of the function P(). Of the distribution.

I don't see that. In fact, you can't just talk about "P(A)" without specifying (using your terminology) the parameter set.

It was tacitly assumed that we fixed a,b and L.

But then, who are you to say a priori what the probability of A depends on *really*? For all we know going in, it might depend on a, b, L, B, the price of tea in China, and the color of my socks. In principle, physically speaking, we have to have some kind of argument that we've captured all the possibly-relevant variables. I can't put my finger on it yet, but you are somehow sneaking in a physical assumption -- outcome independence -- under the guise of the Kolmogorov formalism.

No, if we have a theory that gives us the probability of A, then it is just that. We can now try to find out if there are CONDITIONAL probabilities in the way you suggest (the price of tea in China and so on, as long as they are part of the set of events M), but we're not interested in that. If this bothers you, think of P(A) as the probability of A, weighted over all its possible "dependencies" according to the probabilities of those dependencies.

After all, there's a theorem in probability theory that says:

if {B1, B2, ..., Bn} are mutually exclusive and complete (their union is Omega), then:

P(A) = P(A|B1) P(B1) + P(A|B2) P(B2) + ... + P(A|Bn) P(Bn)

Think of B1 = "1 kg of tea in China costs $1.0", B2 = "1 kg of tea in China costs $2.0", etc... :-)
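
In code, with made-up numbers for the tea prices:

Code:
# Law of total probability, with an invented three-way partition of tea prices.
P_B = [0.2, 0.5, 0.3]            # P(B1), P(B2), P(B3); a partition, so they sum to 1
P_A_given_B = [0.9, 0.4, 0.1]    # conditional probabilities P(A|Bi)

P_A = sum(pa * pb for pa, pb in zip(P_A_given_B, P_B))
print(P_A)   # 0.9*0.2 + 0.4*0.5 + 0.1*0.3 = 0.41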


Yes, OK, because P(a) is meaningless if we're treating "a" as a freely-choosable variable. This all makes sense. Of course, I'm still worried that you're going to have to treat "L" exactly the way you say you can't treat "a" here, since (at least in some theories) "L" might not be freely-choosable.

Again, two ways out. 1) I'm god and I can choose L freely. 2) Include in L only the parts I can choose freely, consider the uncontrollable parts simply as part of the probability distribution.

Right? But is this new? I mean, this is just exactly what I would have said before (not making any distinction between "parameter sets" and "variables we conditionalize on") by writing:

P(A|B,a,b,L) = P(A|a,L)

Right? So I don't think there's really anything new here, neither a new point nor something new and important that follows from your different math notation. But I'm sure you'll correct me if I'm missing something here.

What is new (or maybe not), is that if you only assume:
P(A ; a,b,L) is not a function of b
P(B ; a,b,L) is not a function of a

(this is information locality, right?)

and you assume determinism:
P maps only onto {0,1}

that you can DERIVE P(A,B ; a,b,L) = P(A ; a,L) x P(B ; b,L)

(Bell locality).

If you do not assume determinism, you cannot do so.
If you do not assume information locality, you can still write the product, but P(A;a,b,L) will be there and you still don't have Bell Locality, because P(A) still depends on a, b and L.
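
The singlet statistics themselves illustrate the first clause: they are information-local (each marginal is 1/2, independent of the distant setting), but being irreducibly stochastic they do not factorize. A quick check with the standard singlet joint probabilities:

Code:
import math

# Standard spin-1/2 singlet joint probabilities, analyser angles a, b in radians:
# P(same outcomes) = sin^2((a-b)/2), P(opposite) = cos^2((a-b)/2),
# split evenly between the two cases of each kind.
def P_joint(A, B, a, b):
    if A == B:
        return 0.5 * math.sin((a - b) / 2) ** 2
    return 0.5 * math.cos((a - b) / 2) ** 2

a, b = 0.0, math.pi / 3
pA = sum(P_joint(+1, B, a, b) for B in (+1, -1))   # 0.5, independent of b
pB = sum(P_joint(A, +1, a, b) for A in (+1, -1))   # 0.5, independent of a
print(P_joint(+1, +1, a, b), pA * pB)              # 0.125 vs 0.25: no factorization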

Now this, together with the other theorems -- namely that Bell locality implies information locality (trivial), and that for any Bell-local theory you can always find an equivalent underlying theory that is deterministic (invoking the god assumption, eventually) -- led me to my final conclusion: Bell locality is equivalent to information locality AND underlying determinism.

Maybe all this has already been known for ages. I would think so!

OK, I don't see any problem with this. I mean, it's certainly true that in OQM it is possible to talk about P(A;a,b,L). It is, for example, 50% (independent of "a" and "b") if L is the singlet state and we're talking about the usual EPR/Bell situation.

You got it. In fact, this independence already means information locality in this particular case. And even if L is any state (depending on 4 complex numbers u, v, w, x: u|+>|+> + v|+>|-> + w|->|+> + x|->|->), P(A) will be a number depending on a (the orientation of analyser a) and on L, but it will not depend on b.
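
For what it's worth, this is easy to verify directly from the Born rule; here's a small numpy sketch of mine (an arbitrary state u, v, w, x, with the joint probabilities summed over Bob's two outcomes):

Code:
import numpy as np

# For an arbitrary two-spin state u|++> + v|+-> + w|-+> + x|-->, the Born-rule
# probability of Alice's "+" outcome depends on a and the state, but not on b.

def proj(angle, outcome):
    # projector onto spin up (+1) or down (-1) along an axis at `angle`
    ket = (np.array([np.cos(angle / 2), np.sin(angle / 2)]) if outcome == +1
           else np.array([-np.sin(angle / 2), np.cos(angle / 2)]))
    return np.outer(ket, ket)

def P_A_plus(a, b, psi):
    psi = psi / np.linalg.norm(psi)
    # joint probabilities via the Born rule, summed over Bob's outcomes
    return sum(np.real(psi.conj() @ np.kron(proj(a, +1), proj(b, B)) @ psi)
               for B in (+1, -1))

psi = np.array([0.3, 0.1 + 0.2j, -0.7, 0.4])              # arbitrary u, v, w, x
print(P_A_plus(1.0, 0.0, psi), P_A_plus(1.0, 2.0, psi))   # equal: b drops out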

OK, but then keep in mind that your identification of parameter independence with signal locality is conditioned (ha ha ha) on this assumption. In fact, a violation of PI is not sufficient to establish violation of signal locality. You also need controllability of the state L. (How to formulate exactly how much controllability is needed, I'm not sure...??) That is,

NOT(Signal Locality) ==> NOT(Parameter Independence) AND L-Controllability

Or: Inadequate L-controllability ~OR~ Parameter Independence is needed to have Signal Locality. Bohm gets it the first way, OQM the second.
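
To make the point vivid, here is a toy model of my own in which Bob's outcome blatantly depends on Alice's setting (parameter dependence), yet whether this can be used to signal hinges entirely on whether L can be pinned down or only arrives equilibrium-distributed, which is the Bohmian situation:

Code:
# Toy model, purely illustrative: Bob's outcome depends on Alice's setting a.
def B_outcome(a, L):
    return +1 if (a + L) % 2 == 0 else -1

def P_B_plus(a, L_dist):
    # Bob's marginal, averaged over a distribution of L
    return sum(p for L, p in L_dist.items() if B_outcome(a, L) == +1)

# L controllable (pinned to 0): Bob's statistics track a => signalling.
print(P_B_plus(0, {0: 1.0}), P_B_plus(1, {0: 1.0}))                  # 1.0 vs 0.0

# L uncontrollable, equilibrium-distributed: the a-dependence washes out.
print(P_B_plus(0, {0: 0.5, 1: 0.5}), P_B_plus(1, {0: 0.5, 1: 0.5}))  # 0.5 vs 0.5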

Again, one should think hard about what it means if a "state" L is *in principle* uncontrollable. In what way can we then say that it is a *different* state? Shouldn't we just extract what is in principle (even if not in practice) controllable, and use only that as the state, saying that the fundamentally random part is, well, fundamentally random: not "an uncontrollable part of the state" but just part of the probability distribution?
Isn't this similar to trying to distinguish fundamentally identical particles?

So where does this leave us? If Bell Locality is equivalent to the conjunction of OI and PI -- and also to the conjunction of PI and Determinism -- does that mean that Outcome Independence is equivalent to Determinism? That's a surprising and kind of interesting conclusion I guess.

That doesn't follow logically; it isn't because

A AND B <==> A AND C

that B <==> C! (Counterexample: take A false; then both conjunctions are false no matter what truth values B and C have.)

It seems only to show that for a deterministic theory, violation of PI is sufficient for violation of Bell Locality. So if you know that a given theory is deterministic, you don't have to check for Bell Locality to see if it's going to obey Bell's Inequality -- you can simply check for Parameter Dependence.

Right.

But you want to say something like this means what Bell was really adding to relativity's "no signalling" condition is an unwarranted desire for determinism?

I think that was the idea. Or an unwanted consequence :-)

I don't see that, especially considering Signal Locality and PI aren't the same thing.

Well... I'd say they are. I don't see the use of postulating a fundamental impossibility of fixing L in principle. And then they are equivalent, no?
 
  • #180
ttn said:
...because any non-deterministic theory can be made into a deterministic one by adding more hv's...

Assuming that you are defining one theory as relatively "more" deterministic than another, a couple of extra points need to be added.

1. When comparing two theories, I believe it is fair to define a theory (Y) as being objectively BETTER than another (X) IF its predictive results are more accurate/descriptive. So I am mapping your concept of MORE DETERMINISTIC onto my concept of BETTER. Is this the sense you intended?

2. Assuming you agree with this mapping, I would then agree that any such Y must always have more input variables (previously hidden variables) than X.

3. Adding input variables to X will not necessarily lead to a BETTER theory Y. If it doesn't, then Y is an AD HOC theory.

On the other hand, if you are saying that adding hidden variables to a non-deterministic theory such as QM will yield a deterministic theory... I would challenge that sense of your statement. To be convincing, you would first need to actually find such hidden variables.
 
