Hi mn4j, apologies for not replying to your last post before now, I started it a while ago but realized it would require a somewhat involved response, so I kept putting off writing it for weeks. Anyway, I've finally finished it up:
mn4j said:
You still have not said why it should not be separate. It would seem that if Bell's proof was robust, it should be able to accommodate hidden variables at the sources in addition to source parameters.
It is able to do so. I already said "the hidden variables can be included in \lambda"--did you miss that, or are you not understanding it somehow?
mn4j said:
It should tell you a lot that the hidden variables must be defined a specific way in order for the proof to work.
Physically the hidden variables can be absolutely anything, but for the proof to work you do need to assign them symbols separate from those used for the experimental choices. This is like just about any proof: you can't redefine terms willy-nilly and expect it to still make sense. If the proof is mathematically and logically valid, then you have to accept that the conclusions follow from the premises; you can't object to it on the basis that you wish the symbols meant different things than what they are defined to mean.
mn4j said:
Since you are the one claiming that Bell's proof eliminates all possible local-hidden variable theorems, the onus is on you to explain why the stations should not be able to get separate local hidden variables.
Do you understand the difference between "the stations should not be able to get separate local hidden variables" and "there can be hidden variables associated with the stations, but the symbols used to refer to them should be separate from the symbols used to refer to the experimenters' choice of measurement angles"? Remember, each value of \lambda is supposed to stand for an
array of values for all the hidden variables--we are supposed to have some function that maps values of \lambda to a (possibly very long) list of values for all the different physical variables which may be in play, like "\lambda=3.8 corresponds to hidden variable #1 having value x=7.2 nanometers, hidden variable #2 having value 0.03 meters/second, hidden variable #3 having value 34 cycles/second, ... , hidden variable #17,062,948,811 having value 17 m/s^2", something along those lines. There's no reason at all why the long list of values included in a given value of \lambda can't be values of hidden variables associated with the measuring-device.
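To make the bookkeeping concrete, here's a toy sketch of what I mean (every name and value below is made up purely for illustration, not taken from Bell's paper): a single value of \lambda just indexes one complete assignment of all the hidden variables, and nothing stops that assignment from including variables tied to the measuring devices themselves.

```python
# Toy illustration: each value of lambda indexes a complete assignment of
# ALL hidden variables in play -- including ones associated with the
# measuring devices. All names and numbers here are invented examples.
hidden_variable_assignments = {
    3.8: {
        "hidden_var_1_nm": 7.2,            # e.g. some length, in nanometers
        "hidden_var_2_m_per_s": 0.03,      # some speed
        "detector_A_internal_state": 34,   # a variable of detector A itself
        "detector_B_internal_state": 17,   # a variable of detector B itself
    },
}

# Knowing lambda = 3.8 means knowing the whole list, detector variables included:
lam = hidden_variable_assignments[3.8]
```

The point is just that "the hidden variables can be included in \lambda" is a statement about this kind of indexing, not a restriction on what the variables physically are.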
mn4j said:
You don't recognize a simple wave equation?
Of course I recognize a wave equation--weren't you tipped off by the fact that I immediately suggested the idea of particles being bobbed along by an electromagnetic plane wave? My point was that I wanted to see a well-defined physical scenario, compatible with local realism, in which the equation would actually apply to physical elements with a spacelike separation but no common cause in their mutual past light cones to explain why they were both obeying this equation (in the plane-wave example, for instance, the oscillations of the charges which generated the wave would lie in the overlap of the past light cones). However, I've since realized that they might both be synchronized just because of a coincidental similarity in their initial conditions, so I've modified my comments about the relevance to Bell's theorem accordingly--see below.
mn4j said:
It is relevant. Especially since we know about wave-particle duality. It should tell you that we do not need psychokinesis to explain correlations between distant objects.
Of course wave-particle duality is part of QM, and you can't treat it as a foregone conclusion that QM itself is compatible with local realism.
JesseM said:
It should be obvious that in a relativistic universe, any correlation between events with a spacelike separation must be explainable in terms of other events in the overlap of their past light cones. If you disagree, please give a detailed physical model of a situation in electromagnetism (the only non-quantum relativistic theory of forces I know of) where this would not be true.
mn4j said:
You obviously have not thought it through well enough. Two objects can be correlated because they are governed by the same physical laws, whether or not they share a common past or not. This is obvious.
If two experimenters at a spacelike separation happen to choose to do the same experiment, then since the same laws of physics govern them they'll get correlated results--but this is a correlation due to the coincidence of their happening to independently replicate the same experiment, not the type of correlation where seeing the results of both experimenters' measurements tells us something more about the system being measured than we'd learn from just looking at the results that either experimenter gets on their own.

This does show that my statement above is too vague, though, and needs modification. One way to sharpen things a little would be to specify that we're talking about experiments where, even with the same settings, the experimenters can get different results on different trials, with the results being seemingly random and unpredictable; if we find that the results of the two experimenters are nevertheless consistently correlated, with a spacelike separation between pairs of measurements, this is at least strongly suggestive of the idea that each result was conditioned by events in the past light cones of the two measurements. But this is still not really satisfactory, because in principle there might be some hidden deterministic pattern behind the seemingly random results, and it might be that the two systems they were studying coincidentally had identical and synchronized deterministic patterns (for example, they might both be looking at a series of numbers generated by a pseudorandom deterministic computer program, with the programmers at different locations coincidentally having written exactly the same program without having been influenced to do so by a common cause in their mutual past light cones). So, back to the drawing board!
Let me try a different tack. Consider the claim I was making about correlations in a local realist universe earlier, which you were disputing for a while but then stopped after my post #51, so I'm not really sure if I managed to convince you with that post...here's the statement from post #51:
In a universe with local realist laws, the results of a physical experiment on any system are assumed to be determined by some set of variables specific to the region of spacetime where the experiment was performed. There can be a statistical correlation (logical dependence) between outcomes A and B of experiments performed at different locations in spacetime with a spacelike separation, but the only possible explanation for this correlation is that the variables associated with each system being measured were already correlated before the experiment was done ... Do you disagree? If so, try to think of a counterexample that we can be sure is possible in a local realist universe (no explicitly quantum examples).
If you don't disagree, then the point is this: if the only reason for the correlation between A and B is that the local variables \lambda associated with system #1 are correlated with the local variables associated with system #2, then if you could somehow know the full set of variables \lambda associated with system #1, knowing the outcome B when system #2 is measured would tell you nothing additional about the likelihood of getting A when system #1 is measured. In other words, while P(A|B) may be different from P(A), P(A|\lambda B) = P(A|\lambda). If you disagree with this, then I think you just haven't thought through carefully enough what "local realist" means.
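As a sanity check on this "screening off" claim, here's a toy Monte Carlo model of my own construction (not anything from Bell's paper, and the probabilities are arbitrary): a shared hidden variable \lambda, plus independent local noise at each station, produces a correlation between A and B, yet once you condition on \lambda, knowing B tells you nothing further about A.

```python
import random

random.seed(0)
N = 200_000
n = {"A": 0, "B": 0, "AB": 0, "lam0": 0, "A_lam0": 0, "B_lam0": 0, "AB_lam0": 0}
for _ in range(N):
    lam = random.choice([0, 1])             # shared (pre-correlated) hidden variable
    A = lam == 1 or random.random() < 0.2   # outcome at station 1: lam + local noise
    B = lam == 1 or random.random() < 0.2   # outcome at station 2: lam + independent noise
    n["A"] += A
    n["B"] += B
    n["AB"] += A and B
    if lam == 0:                            # condition on one value of lambda
        n["lam0"] += 1
        n["A_lam0"] += A
        n["B_lam0"] += B
        n["AB_lam0"] += A and B

p_A = n["A"] / N                            # unconditional P(A)
p_A_given_B = n["AB"] / n["B"]              # P(A|B): clearly larger than P(A)
p_A_given_lam0 = n["A_lam0"] / n["lam0"]    # P(A|lam)
p_A_given_lam0_B = n["AB_lam0"] / n["B_lam0"]  # P(A|lam B): same as P(A|lam)
```

Here P(A|B) differs from P(A) because \lambda is shared, but P(A|\lambda B) and P(A|\lambda) come out (statistically) equal, exactly because the only link between the stations runs through \lambda.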
Note that I put some ellipses in the quote above; the statement I removed was "that the systems had 'inherited' correlated internal variables from some event or events in the overlap of their past light cones". I want to retract that part of the post, since it does have some problems as you've pointed out, but I stand by the rest. The statement about "variables specific to the region of spacetime where the experiment was performed" could stand to be made a little more clear, though. To that end, I'd like to define the term "past light cone cross-section" (PLCCS for short), which stands for the idea of taking a spacelike cross-section through the past light cone of some point in spacetime M where a measurement is made; in SR this spacelike cross-section could just be the intersection of the past light cone with a surface of constant t in some inertial reference frame (which would be a solid 3D ball containing all the events at that instant which can have a causal influence on M at a later time). Now, let \lambda stand for the complete set of values of all local physical variables, hidden or non-hidden, which lie within some particular PLCCS of M. Would you agree that in a local realist universe, if we want to know whether the measurement M yielded result A, and B represents some event at a spacelike separation from M, then although knowing B occurred may change our evaluation of the probability that A occurred, so that P(A|B) is not equal to P(A), if we know the full set of physical facts \lambda about a PLCCS of M, then knowing B can tell us nothing additional about the probability that A occurred at M, so that P(A|\lambda) = P(A|\lambda B)?
If so, consider two measurements of entangled particles which occur at spacelike-separated points M1 and M2 in spacetime. For each of these points, pick a PLCCS from a time which is prior to the measurements, and which is
also prior to the moment that the experimenter chose (randomly) which of the three detector settings under his control to use (as before, this does not imply the experimenter has complete control over all physical variables associated with the detector). Assume also that we have picked the two PLCCS's in such a way that every event in the PLCCS of M1 lies at a spacelike separation from every event in the PLCCS of M2. Use the symbol \lambda_1 to label the complete set of physical variables in the PLCCS of M1, and the symbol \lambda_2 to label the complete set of physical variables in the PLCCS of M2. In this case, if we find that whenever the experimenters chose the same setting they
always got the same results at M1 and M2, I'd assert that in a local realist universe this must mean the results each of them got on any such trial were already predetermined by \lambda_1 and \lambda_2; would you agree? The reasoning here is just that if there were any random factors between the PLCCS and the time of the measurement which were capable of affecting the outcome, then it could no longer be true that the two measurements would be guaranteed to give identical results on every trial.
Now, keep in mind that each PLCCS was chosen to be prior to the moment each experimenter chose what detector setting to use. So,
if we assume that the experimenters' choices were uncorrelated with the values of the physical variables \lambda_1 and \lambda_2--either because the choice involved genuine randomness (using the decay of a radioactive isotope and assuming this is a truly random process, for example), or because the choice involved "free will" (whatever that means)--then if it's true that \lambda_1 and \lambda_2 predetermine the result on every trial for whichever choice they happen to make, in a local realist universe we must assume that on each trial \lambda_1 and \lambda_2 predetermine what the results
would be for any of the three choices each experimenter can make, not just the result for the choice they do actually make on that trial (since the values of physical variables in the PLCCS cannot 'anticipate' which choice will be made at a later time), the assumption known as
counterfactual definiteness. And if at the time of the PLCCS there was already a predetermined answer for the result of any of the three choices the experimenter could make, then if they always get the same results when they make the same choice, we must assume that on every trial the two PLCCSs had the
same predetermined answers for all three results, which is sufficient to show that the Bell inequalities should be respected (see my post #3). It would be
simplest to assume that the reason for this perfect matchup between the PLCCSs on every trial was that they had "inherited" the same predetermined answers from some events in the overlap of the past light cones of the two measurements, but this assumption is not strictly necessary.
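The step from "the two PLCCSs carry the same predetermined answers for all three settings on every trial" to a Bell inequality can be checked by brute force. The sketch below is my own illustration of the standard counting argument (not a quote from post #3): enumerate all 8 possible triples of predetermined +/-1 answers, and note that for every triple, when the two experimenters happen to pick different settings, the results still agree at least 1/3 of the time--so any statistical mixture of triples also gives P(agree | different settings) >= 1/3, whereas QM predicts agreement rates below 1/3 for suitable detector angles.

```python
from itertools import product

# Each triple lists the predetermined +/-1 answers for settings 0, 1, 2,
# shared by both PLCCSs (so same setting -> same result automatically).
pairs = [(0, 1), (0, 2), (1, 2)]   # the unordered pairs of *different* settings
agree_fractions = []
for answers in product([+1, -1], repeat=3):
    agree = sum(answers[i] == answers[j] for i, j in pairs) / len(pairs)
    agree_fractions.append(agree)

# No triple agrees on fewer than 1/3 of the different-setting pairs
# (e.g. (+1, +1, -1) agrees only on the (0, 1) pair), so no mixture of
# triples can push the agreement rate below 1/3 either.
bound = min(agree_fractions)
```

This is only the simplest of the Bell inequalities, of course, but it is the one at stake in the perfect-correlation scenario described above.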
The deterministic case
If the experimenters' choices are not assumed to be truly random or a product of free will, but instead are pseudorandom events that do follow in some deterministic (but probably chaotic) way from the complete set of physical variables in the PLCCS, then showing that the results for
each possible measurement must be predetermined by the PLCCS is trickier. I think we can probably come up with some variant of the "no-conspiracy" assumption discussed earlier that applies in this case, though. To see why it would seem to require a strange "conspiracy" to explain the perfect correlations in a local realist universe without the assumption that there was a predetermined answer for each possible choice (i.e. without assuming counterfactual definiteness), let's imagine we are trying to perform a computer simulation to replicate the results of these experiments. Suppose we have two computers A and B which will simulate the results of each measurement, and a middle computer M which can send signals to A and B for a while but then is disconnected, leaving A and B isolated and unable to communicate at some time t, after which they simulate both an experimenter making a choice and the results of the measurement with the chosen detector setting. Here the state of the information in each computer at time t represents the complete set of physical variables in the PLCCS of the measurement, while the fact that M was able to send each computer signals prior to t represents the fact that the state of each PLCCS
may be influenced by events in the overlap of the past light cones of the two measurement events.
Also, assume that in order to simulate the seemingly random choices of the experimenters on each trial, each computer uses some complicated pseudorandom algorithm to determine the choice, using the complete set of information in the computer at time t as a seed, so that even in a deterministic universe, everything in the past light cone of the choice has the potential to influence the choice. Finally, assume the initial conditions at A and B are not identical, so the two experimenters are not just perfect duplicates of one another. Then the question becomes: is there any way to design the programs so that the simulated experimenters always get the same outcome when they make the same choice about detector settings, but counterfactual definiteness does
not apply, meaning that each computer didn't just have a preset answer for each detector setting at time t, but only a preset answer for the setting the simulated experimenter would, in fact, choose on that trial? Well, if the computer simulations are deterministic over multiple trials so we just have to load some initial conditions at the beginning and then let them run over as many trials as we want, rather than having to load new initial conditions for each trial, then in principle we could imagine some godlike intelligence looking through
all possible initial conditions (probably a mind-bogglingly vast number: if N bits were required to describe the state of the simulation at any given moment, there'd be 2^N possible initial conditions), and simply picking the very rare initial conditions where it happened to be true that whenever the two experimenters made the same choice, they always got the same results. Then if we run the simulation forward from those initial conditions, it will indeed be guaranteed with probability 1 that they'll get the same results whenever they make the same choice, without the simulation needing to have had predetermined answers for what they
would have gotten on these trials if they had made a different choice. But this preselecting of the complete initial conditions, including all the elements of the initial conditions that might influence the experimenters' choices, is exactly the sort of "conspiracy" that the no-conspiracy assumption is supposed to rule out.
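To see what such a "conspiracy" selection looks like in miniature, here's a toy search of my own construction (the seed ranges, trial count, and use of Python's built-in PRNG are all arbitrary illustrative choices). Each simulated computer derives both the experimenter's choice and the measurement result deterministically from its own initial seed, with no per-setting answers stored anywhere; the godlike search then simply discards every pair of initial conditions that ever yields mismatched results on matched settings.

```python
import random

TRIALS = 6

def run(seed_a, seed_b):
    """Deterministic toy simulation of computers A and B; returns True if
    matched settings always gave matched results over all trials."""
    ra, rb = random.Random(seed_a), random.Random(seed_b)
    for _ in range(TRIALS):
        choice_a, result_a = ra.randrange(3), ra.choice([-1, +1])
        choice_b, result_b = rb.randrange(3), rb.choice([-1, +1])
        if choice_a == choice_b and result_a != result_b:
            return False   # same setting, different results: discard this pair
    return True

# The "godlike" brute-force scan over initial conditions: keep only the
# seed pairs for which the perfect correlation happens to hold by luck.
conspiracies = [(a, b) for a in range(200) for b in range(200) if run(a, b)]
```

Replaying any surviving pair reproduces "same choice, same result" with probability 1, even though neither computer ever held predetermined answers for all three settings--but only because the complete initial conditions, including everything feeding the pseudorandom choices, were cherry-picked in advance. That hand-of-god preselection is precisely what the no-conspiracy assumption forbids.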
So, let's make some slightly different assumptions about the degree to which we can control the initial conditions. Let's say we do have complete control over the data that M sends to A and B on each trial, corresponding to the notion that we want to allow the source to attach hidden variables to the particles it sends to the experimenters in any fiendishly complicated way we can imagine. If you like we are also free to assume we have complete control over any variables, hidden or otherwise, associated with the measuring-devices being simulated in the A and B computers initially at time t (after M has already sent its information to A and B but before the simulated experimenters have made their choice), to fit with your idea that hidden variables associated with the measuring device may be important too. But assume there are other aspects of the initial conditions at A and B that we don't control--perhaps we can only decide what the "macrostate" of the neighborhood of the two experimenters looks like, but the detailed "microstate" is chosen randomly, or perhaps we can decide the values of all non-hidden variables in their neighborhood but not the hidden ones (aside from the ones associated with the particles sent by the source and the measuring devices, as noted above). Since the pseudorandom algorithm that determines each experimenter's choice takes the
entire initial state as a seed, this means that without knowing every single precise detail of the initial state, we can't predict what choices the experimenters will make on each trial. So, for all practical purposes this is just like the situation I discussed earlier where the experimenters' choices were truly random and unpredictable, which means that if we only control some of the initial data at time t (the variables sent from M and the variables associated with the measuring-device) but after that must let the simulation run without any further ability to intervene, the only way to guarantee that the experimenters always get the same result when they make the same choice is to make sure that the data we control at time t guarantees with 100% certainty what results the experimenters would get for
any of the three possible choices, in such a way that the predetermined answers match up for computer A and computer B.