Long-distance correlation, information, and faster-than-light interaction

  • #51
Ken G said:
Another approach is to simply say that the measurements done on the system are part of what determines the probability distributions of those measurements. This is, after all, just what we see with entangled systems; it was only ever us that said these systems had to consist of independent probability distributions. We know they do not, so why force "influences" down their throat, when we can just say what we see to be true: the probabilities depend on the measurements we choose because part of what we mean by the system is how the outcomes of chosen measurements interact with the preparation. That's really all physics ever concerns itself with; the idea that you need influences to alter the independent probabilities is merely a holdover from other contexts where that actually works.

First of all, entanglement is never about perfect correlations, because you don't get Bell violations with perfect correlations. But a more important issue is, I never said the theory doesn't violate locality, of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences." One simply says that to talk meaningfully about a system, we need more than its preparation, we need to know what attributes of that system are being established by measurement, and the probabilities it will exhibit depend on the way the measurements establish those attributes. Again, outcomes are an interaction between preparation and measurement choices, no influences needed, FTL or otherwise. I'd say that same thing even if the speed of light were infinite-- there's no magical "influence" there either way, and the causality violations that picture implies are just a good clue we made a bad choice of language.
Well yes, there is another explanation that works. It's called superdeterminism. Unfortunately, it's non-scientific. If that is what you mean (as it seems to me), then it's no real alternative to FTL.

Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
 
  • #52
DrChinese said:
... claims to match QM and is "local realistic" is going to be a violation of PhysicsForums guidelines regarding speculation and unpublished works (ie what is an acceptable source)...

I'm sorry, I will try to be more careful. I actually had not even checked if it did match QM till you mentioned it. I was pleasantly surprised to hear you say it did.

I have been concentrating on the process of constructing the matrices A, B, E, and F. The only reason to run the animation is to generate the matrices, but it is nice to highlight the time-dependent things that are happening, i.e., 1) a random photon is shot, 2) Bob's and Alice's detectors are randomly set, 3) the detection is recorded in the matrices. Repeat until done.

I found the E matrix interesting as I had not realized the importance of the two 0's in the E matrix in the Bell test until I did the animation.

Also, I have converted the animation to do only "statistical" photons (i.e., previously stages 1, 2, and 3 were classical and stage 4 was statistical; now there are only 3 stages, all statistical). Now you can see some numbers showing up in matrix E when doing the single-shot animation.

BTW, it would be great if you were interested in trying the functions in post #44 and generating a few sets of matrices to have a look at. :)
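For anyone who wants to experiment without running the animation, here is a minimal sketch of that loop in Python. It is not the code from post #44: the detector angles, the 50/50 local outcomes, and the cos² agreement rule are my assumptions (standard QM for photon pairs), and the table below is only an E-like correlation matrix, not the exact A, B, E, F layout from the animation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CHSH-style settings in degrees; the real choices in post #44 may differ.
a_set = [0.0, 45.0]      # Alice's two detector angles
b_set = [22.5, 67.5]     # Bob's two detector angles

n_trials = 100_000
# counts[i, j] is a 2x2 table of outcome pairs for settings (a_set[i], b_set[j]):
# index 0 = pass, 1 = absorb; first outcome index is Alice, second is Bob.
counts = np.zeros((2, 2, 2, 2), dtype=int)

for _ in range(n_trials):
    i = rng.integers(2)                       # 2) detectors randomly set
    j = rng.integers(2)
    delta = np.radians(a_set[i] - b_set[j])
    a = rng.integers(2)                       # 1) photon shot; 50/50 locally
    agree = rng.random() < np.cos(delta) ** 2 # QM probability the outcomes agree
    b = a if agree else 1 - a
    counts[i, j, a, b] += 1                   # 3) detection recorded

# E-like correlation table: E[i, j] = (N++ + N-- - N+- - N-+) / N per setting pair.
E = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        c = counts[i, j]
        E[i, j] = (c[0, 0] + c[1, 1] - c[0, 1] - c[1, 0]) / c.sum()

S = E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1]     # CHSH combination
print(E)
print("S =", S)                               # about 2.83 for these settings
```

With these (assumed) angles the simulated S comes out near 2√2, the quantum prediction; any matrices from the actual functions could be compared against a table like E above.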
 
  • #53
zonde said:
Well yes, there is another explanation that works. It's called superdeterminism.
No, what I just described is not superdeterminism. What I just described is pure science. Let me repeat it to prove that.

In scientific thinking, any "system" can be regarded as a kind of machine for connecting a preparation with a set of outcomes for any set of measurements, where what I mean by "outcomes" can include probability distributions and not just definite predictions. That's it, that's what I said.

There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible. There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible. And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there has to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system).

In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Btw, as you refer to FTL and retrocausal influences together, do you consider them equally unacceptable?
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
 
Last edited:
  • #54
Ken G said:
Let me repeat it to prove that.

I really can't understand you, Ken G. Could you address my question directly?

I don't even know if I understood your premises here, but this is what it seems to me you're saying: the resulting probability distribution depends on the holistic system (which includes the measurement instruments). I've made a picture:

[attached image: diagram of the EPR setup, a source with a polarizer on each side]


It doesn't matter if the polarizers are part of the holistic system: sure, rotating one changes the outcome probability distribution, but it changes at the other side too. A choice here has an impact there, because the system is extended in space. You can say the whole system changes nonlocally, so as to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
 
  • #55
Ken G said:
There is no scientific requirement whatsoever that the "system" be able to be analyzed in terms of parts, it just happens that this is often possible.
If we can make observations in terms of parts then of course we can analyze these observations.

Ken G said:
There is no scientific requirement whatsoever that the probability distributions be independent of each other, in the sense that the preparation must determine what those distributions are, independent of the actual set of measurements that are used-- it just so happens that this is often possible.
Actually, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.

Ken G said:
And finally, there is no scientific requirement whatsoever for us to say that whenever we find correlations between measured outcomes that depend on what measurements are done, there has to be "influences" that propagate across the system, though again, there are situations where that is a useful way to think of what is happening (generally situations where the "influences" are slower than light and could in principle be intercepted, i.e., when the influences are themselves part of the measurable system).
There is such a requirement. That requirement is called falsifiability. That means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create that FTL phenomenon artificially.

Ken G said:
In all cases, what is happening here is simply that models that have worked for us in various contexts, models like "parts" and "independence" and "influences", are simply being mistaken for scientific requirements. But science is not defined by the model, it is defined by the method.
Science is also defined by the requirements that scientific models have to fulfill.

Ken G said:
Yes, I see no particularly important distinction between crossing one light-cone boundary versus crossing two of them. Either way, it's not the way the time-honored model we call an "influence" works. My main objection to that language is it demonstrably forces people to conclude that a behavior they are sure will happen is "weird." That's the signpost of a failing picture. What is weird is supposed to be what we do not expect, rather than what we do expect.
Are your objections about wording or what?

There are two theories, and what we expect from one theory contradicts what we consider possible in the other theory. If you say this is the signpost of a failing picture ... well, I agree.
 
  • #56
ddd123 said:
You can say the whole system changes nonlocally, so as to avoid using the term FTL. But it's only a nominal difference. It may be verbally nicer.

Am I missing something?
Yes. The concept of "FTL" involves the concept "faster." That involves the concept of speed, which involves the concept of movement or propagation. Also, when people use the term FTL, as in this thread, they generally couple it to the term "influence." So this is what I am saying has no need to be present in entanglement: the concept of a propagating influence. That's why I said it's not the "FTL" itself I object to, if all you mean by that is once you've embraced realism, you are stuck with nonlocality-- that is what we know to be true. But we don't have to leave it at that, if we still have something that seems "weird"-- we have something else too. We have the repudiation of the idea that a system is made of parts that connect via "propagating influences". That latter notion is simply a view that often works for us, so we mistake it for some kind of universal truth, or worse, a universal requirement for scientific investigation.

So when you say the system "changes nonlocally" when a measurement choice is made, even that implies a hidden assumption. You see that the measurement is applied at one place and time, and the system changes everywhere. But that isn't supported by the data either. The most general meaning of "a system" is a mapping between a preparation and a set of measurement probability distributions-- after all, that's all that is involved in empirical science. Notice that if you define the "system" that way, it never "changes" nonlocally! To get rid of "nonlocality", one merely needs to define a system the way I just did, and poof, it's gone. The nonlocality comes from requiring that the system must exist independently of the measurements that establish its attributes; that's why it is actually "local realism," i.e., the combination of locality and realism, that is ruled out-- not either one individually. Personally, I find it much more scientific to define a system the way I just did, but this does not get called "realism" because the measurements are included in the meaning of the system.
 
  • #57
I don't understand the definition. One thing is not slicing the system into parts in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events? What do you substitute for it?
 
  • #58
zonde said:
If we can make observations in terms of parts then of course we can analyze these observations.
What do you mean by "observations in terms of parts"? What we have are observations at various places and times, that's it-- that's not parts. The "parts" are your personal philosophy here, nothing more. Now, it's certainly not a rare philosophy, and the "parts" model certainly works very well for us in almost all situations. But not all, that's the point. Hence, we should question that personal philosophy about observations of systems. It is the observations that have a place and time, not the "parts of the system." That's why I brought up white dwarf stars; they make this point really clear: you have something the size of a planet that cannot be analyzed in terms of "observations on parts", because the "parts" are indistinguishable from each other.

Actually, the no-superdeterminism assumption requires that the preparation is independent of reasonably randomized measurement settings.
I haven't said anything about superdeterminism other than that I am not talking about superdeterminism, nor have you shown that what I am saying is tantamount to superdeterminism.
There is such a requirement. That requirement is called falsifiability.
Then you just violated falsifiability with your first sentence above. What is falsifiable about the claim that observations at a place and time are "observations in terms of parts", if entanglement is not a falsification of that claim, and indistinguishability in white dwarfs is not a falsification of that claim? Anyone who thinks observations are "observations in terms of parts" (which, I admit, is pretty much everyone before they learn quantum mechanics) is going to find both entanglement and fermionic indistinguishability "weird." But weirdness is not falsification, it is more like repudiation.

That means there should be a valid observation that could indicate an FTL "influence" and falsify the "no FTL" hypothesis, even if we can't create that FTL phenomenon artificially.
Are you saying there is such a valid observation? If not, it means we are deciding between personal philosophies, and I've never claimed otherwise.
There are two theories, and what we expect from one theory contradicts what we consider possible in the other theory. If you say this is the signpost of a failing picture ... well, I agree.
I don't see what two contradicting theories you have in mind here. If anyone had been talking about slower-than-light influences that can be intercepted and studied as part of the system, then we'd be talking about a theory that makes different predictions than quantum mechanics, but no one is talking about a theory like that.
 
  • #59
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side, suddenly, placing the atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
 
  • #60
ddd123 said:
I don't understand the definition. One thing is not slicing the system into parts in the sense that they're not taken to be independent; another is ignoring spatiotemporal extension and events. Are you giving up the notion of spacetime events?
I have in no way given up the notion of spacetime events, I have merely noticed what is in evidence: the spacetime events are the observations, not the parts of the system. The parts of the system need to mean nothing more than what the system is composed of, which is all part of the preparation. If you do an observation at a given place and time, it is certainly important that the observation occurred at a given place and time, and you will never need to use any other concept of "parts." Once the measurement is made, you can regard that as a new preparation, a new system, and a new mapping from that total preparation to new outcomes. Those new probabilities can change instantly because your information has changed instantly-- that's "Bertlmann's socks." Nonlocality only appears when you combine with realism, i.e., combine with the picture that there exists a unique probability distribution that goes with each measurement, independently of all other measurements outside the past light cone of that one. Local realism says you might not know what that unique distribution is, and any new information you get that constrains the preparation could cause you to reassess that unique probability, but you hold that it still exists unchanged. But if you don't think of a system as a list of unique probabilities, but rather as a machine for connecting a preparation with a set of probabilities associated with any set of measurements you could in principle do, then you have no problems with nonlocality.
 
  • #61
ddd123 said:
To me the white dwarf example is absolutely the same, so I kept using the EPR one. You remove an atom from one side, suddenly, placing the atom on the other side becomes possible. It's exactly the same as in my picture above, with an atom placer-remover instead of the polarizer.
Yes, I agree we have the same issue in that example, I just think it is an example where it is even more clear that the "propagating influence" philosophy is not a good one. The reason I say it is "not good" is that it leads us to imagine that what we know will happen is "weird." That's what I mean by a "not good" philosophy. There are not individual electrons in a white dwarf, so they don't have locations; they are indistinguishable from each other. What have locations are the measurements that we do on a sea of electrons.
 
  • #62
Even if you consider the set of measurements as a mapping's input, the elements of this set correspond to spacetime events: so a probability distribution output, as observed in one spacetime region, depends on the input in another instantaneously.

Ken G said:
What have locations are the measurements that we do on electrons.

Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
 
  • #63
ddd123 said:
Even if you consider the set of measurements as a mapping's input, the elements of this set correspond to spacetime events: so a probability distribution output, as observed in one spacetime region, depends on the input in another instantaneously.
There's nothing strange about that, it happens with a pair of socks. But what is significant is that for the socks, the change in the probability distribution only appears to be nonlocal if you think the probability distribution exists at the location of the measurement. If you recognize that the location of a probability distribution is in the brain of the person using it, again there is never any nonlocality. My point is it is very easy to make nonlocality go away; all you need to reject is local realism, that combination that allows us to imagine a set of independent parts that have probability distributions carried around with the parts, and subject to subluminally propagating influences. That picture doesn't work well at all for either white dwarfs or EPR experiments, because the "propagating influences" have to do things like propagate back in time. So I say, jettison that picture, and generalize the meaning of a system to be nothing more than we can observe it to be: a mapping from a preparation to a set of outcomes associated with each hypothetical set of measurements we could do. Nothing more, and nothing less. If you just say that, there's never any nonlocality. Personally, I don't think it's "weird" to stick with what we actually need, especially when we get no nonlocality when we do that.
Right, that's why it doesn't change anything for me. Even without realism you still have nonlocality without relaxing other assumptions (like forward causality).
No, you never get nonlocality without realism. That's why the Bell theorem is said to rule out local realism, it is not said to rule out locality.
 
Last edited:
  • #64
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exist in the spacetime region are the measurement results. The probability distribution is asymptotically reconstructed from a large number of events. The hypothetical concerning measurement choices here involves a different observed result there. The probability distribution is a conclusion; the physical events are laid out, and the hypotheticals involved display an instantaneous "if here then there".

BTW I wasn't talking about Bell's theorem.
 
  • #65
Ken G said:
That's why the Bell theorem is said to rule out nonlocal realism
You mean it rules out local realism? It proves that no local variables can possibly reproduce the observed outcomes.
 
  • #66
ddd123 said:
Ah, I think I'm understanding you more now. But for me it's not that the probability distribution exists at the spacetime location. What exist in the spacetime region are the measurement results.
That's fine, you can have the measurement results exist at their location, and still keep locality. In fact, that's how you keep locality-- measurements have the location of the measurement, predictive probabilities have the location of the brain using them. Stick to that, and add nothing more that you don't need, and you get no nonlocality in either white dwarfs or EPR experiments.
The hypothetical concerning measurement choices here involves a different observed result there. The probability distribution is a conclusion; the physical events are laid out, and the hypotheticals involved display an instantaneous "if here then there".
There are two types of probability distributions. One is an expectation in the mind of the physicist, which is to be checked; the other is a distribution of outcomes, which is not actually a probability distribution, but it is what the probability distribution is to be checked against, so in the sense of comparing apples to apples, we can call that a probability distribution as well. The point is, if you never go beyond asserting that the former has the location of a brain and the time the brain is using it, and the latter has the location of a measuring apparatus and the time of the measurement, then you never need any nonlocality. The need for nonlocality doesn't come from that, it comes from the claim that there exists a unique probability distribution for any hypothetical but possible measurement at any place and time, and that that unique distribution is independent of everything going on outside its past light cone. That's all you need to jettison to restore the proper locality of everything-- get rid of the "propagating influences" requirement.
BTW I wasn't talking about Bell's theorem.
We can just as easily talk about white dwarfs, everything I said above applies there also.
 
  • #67
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies do change instantaneously on one side when changing the measurement choices on the other side.
 
  • #68
ddd123 said:
But, while the hypothesized probability distribution exists only abstractly, the measurement choices and results are real and localized. And the observed correlation frequencies do change instantaneously on one side when changing the measurement choices on the other side.
No-- when a system is viewed as a mapping between a preparation and any set of possible measurements, that automatically includes all their correlations. They are all set by the preparation, so the correlations never "change". What is a change in a correlation?
 
  • #69
Is there an article/book where this is looked at in detail? Or, does this position you're explaining have a name?
 
  • #70
Ken G said:
What is a change in a correlation?
Like the delayed choice quantum eraser?
 
  • #71
ddd123 said:
Is there an article/book where this is looked at in detail? Or, does this position you're explaining have a name?
I don't know if there is a name for it, or if anyone else has said it first. I only know what I can show: you don't get Bell's theorem if you don't adopt local realism. That means any approach that rejects the assumption that a system comes complete with a set of unique probabilities for each set of observations that could be done on it, including the assumption that those probabilities are independent of each other, suffices to make that theorem irrelevant. For example, see http://drchinese.com/David/Bell_Theorem_Easy_Math.htm, where the assumption I refer to is called "local hidden variables."

So in other words, we are talking about our options for not adopting local hidden variables. What I am critiquing is the philosophy that says if we don't have local hidden variables, we must have nonlocal hidden variables. I say drop the whole idea that we have hidden variables, meaning variables "hidden in the parts", where those variables determine unique probabilities for any observations on the parts, independent of any observations anywhere else, and can only be changed by "propagating influences." Instead, just say that a system is a preparation together with the mapping it produces, where that mapping is a map between any set of hypothetical observations you can name, and the associated set of probabilities. That's what quantum mechanics actually does, so my approach is a kind of "minimalist" philosophy applied to scientific interpretation.
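To make that concrete, here is a toy sketch of what adopting "local hidden variables" looks like operationally (the deterministic sign rule is my own invention, not anything from the linked page): each pair carries a hidden angle fixed at the source, each outcome depends only on the local setting and that angle, and the CHSH combination then stays at or below 2, which is exactly what entangled pairs violate.

```python
import numpy as np

rng = np.random.default_rng(1)

def outcomes(setting, lam):
    """Deterministic local rule: each side's result depends only on its own
    setting and the shared hidden angle lam, fixed at the source."""
    return np.where(np.cos(2 * (setting - lam)) >= 0, 1, -1)

a_set = np.radians([0.0, 45.0])          # hypothetical CHSH angles
b_set = np.radians([22.5, 67.5])
lam = rng.uniform(0, np.pi, 200_000)     # the "local hidden variable"

E = np.zeros((2, 2))
for i, a in enumerate(a_set):
    for j, b in enumerate(b_set):
        E[i, j] = np.mean(outcomes(a, lam) * outcomes(b, lam))

S = E[0, 0] - E[0, 1] + E[1, 0] + E[1, 1]
print("S =", S)   # about 2.0 here; no such local rule can exceed 2
```

Quantum mechanics gives about 2.83 at these same angles, and that gap is what Bell's theorem is about.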
 
  • #72
The fact that Bell's theorem only rules out (forward causal, one world, non-superdeterministic, etc...) local realism doesn't mean local non-realism with all the other "normal" assumptions is possible. You still have to prove that it is possible. Bell's is a no-go theorem, not a go theorem.

Also, earlier you said:

Ken G said:
But a more important issue is, I never said the theory doesn't violate locality, of course it does that. I said it doesn't need to violate it by application of a concept of "FTL or retrocausal influences."

But now you seem to be saying precisely that.
 
  • #73
jerromyjon said:
Like the delayed choice quantum eraser?
Explain how a delayed choice quantum eraser "changes" a correlation. The correlations are specified by the preparation. Correlations are two-point comparisons among measurements. I'm saying that when you have the preparation, you have a concept of a "system", which is just a machine for connecting any set of measurements you can do with all the probability outcomes of those measurements, including their correlations. If you do different measurements, you are invoking different correlations from that set. Nothing "changes."
 
  • #74
ddd123 said:
The fact that Bell's theorem only rules out (forward causal, one world, non-superdeterministic, etc...) local realism doesn't mean local non-realism with all the other "normal" assumptions is possible. You still have to prove that it is possible. Bell's is a no-go theorem, not a go theorem.
True, but what I'm talking about is clearly a "go" theorem-- it is quantum mechanics. Are you saying quantum mechanics does not allow us to take a preparation and provide a mapping between any set of measurements and their probability distributions, including all the two-point correlations? Of course that's just what quantum mechanics lets us do. Nothing I've said goes beyond that in any way-- that's the whole point: I am simply not requiring that a "system" acquire any elements at all that quantum mechanics does not empower us to equip our concept of a "system" with.
But now you seem to be saying precisely that.
Yes, I actually edited that statement shortly after saying it, because I realized it wasn't quite what I meant to say. Sorry for creating confusion. What I'm saying is, sometimes we see claims that quantum mechanics forces us to reject locality, but that's only if we insist on clinging to the hidden variables concept associated with what happens when you combine realism with a concept that a system is made of "real parts." The formally true statement, I'm sure you'll agree, is that all we know is that we cannot have "local realism." I'm saying locality is a more general and valuable physical principle than the "made of real parts" version of realism, so when we recognize that we can't have local realism, we should keep the locality but jettison the version of realism that says a system is a sum of independent real parts. That picture is not very good with indistinguishability, and it is not very good with entanglement either. So junk it. You don't lose any quantum mechanics by junking that philosophy-- quantum mechanics doesn't need it. This is clear: all quantum mechanics does is tell you how to take a preparation and figure out how to associate a set of outcomes with a set of measurements, including not only probability distributions but also two-point correlations (and higher).
 
  • #75
Well, personally, I'm not for nonlocal realism because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist. I just can't grasp your argument: I don't know about that. Surely I am intellectually limited, because otherwise I could acknowledge your point or rebut further. If I could see a fully laid-out exposition in a paper, or a completely different explanation (maybe written by someone else, since your style doesn't seem to be able to convey the concept to me), that'd help.
 
  • #76
ddd123 said:
Well, personally, I'm not for nonlocal realism because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist.
What would you call an approach that says this:
A system is a physical setup that is established by a preparation and has the function of associating any hypothetical measurements we can describe with a set of probability distributions, including two-point correlations (and higher). Quantum mechanics provides us with the instructions for connecting the preparations to the probabilities.

To me, that description is all we need to use the system concept in science. I see that it does not necessarily invoke realism in the sense of independent parts owning their own unique probabilities that can only be changed by "influences"; we only invoke that picture when it is actually working for us (which is, as I said, when the influences are subluminal and can themselves be observed by intercepting them), so it gets called "nonrealist." But it's the "realist" approach that seems unrealistic to me. In any event, what I just said never encounters any issue with nonlocality, either in terms of Bell's theorem in EPR systems, or in terms of fermionic indistinguishability in white dwarfs.
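Just to show that definition is not empty talk, here is a sketch of the "machine" written out for one concrete preparation (the Bell state, the polarizer projectors, and all conventions are my own choices for illustration): the system is literally a function from hypothetical settings to a joint outcome distribution, correlations included, with no parts carrying distributions of their own.

```python
import numpy as np

def projector(theta, outcome):
    """Polarization projector at angle theta; outcome 0 = pass, 1 = absorb."""
    t = theta if outcome == 0 else theta + np.pi / 2
    v = np.array([np.cos(t), np.sin(t)])
    return np.outer(v, v)

# Preparation: the Bell state (|HH> + |VV>)/sqrt(2) as a density matrix.
psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho = np.outer(psi, psi)

def system(alpha, beta):
    """The 'system' as a mapping: hypothetical settings -> joint distribution."""
    return {(a, b): np.trace(rho @ np.kron(projector(alpha, a),
                                           projector(beta, b))).real
            for a in (0, 1) for b in (0, 1)}

p = system(0.0, np.radians(22.5))
E = p[0, 0] + p[1, 1] - p[0, 1] - p[1, 0]    # a two-point correlation
print(p)
print("E =", E)    # cos(2 * 22.5 deg), about 0.707
```

Nothing in this function carries a distribution "at" either wing; the settings go in together and the joint probabilities come out together.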
 
Last edited:
  • #77
Ken G said:
What would you call an approach that says this:
A system is a physical setup that is established by a preparation and has the function of associating any hypothetical measurements we can describe with a set of probability distributions, including two-point correlations (and higher). Quantum mechanics provides us with the instructions for connecting the preparations to the probabilities.

I don't know if that's local.

In any event, what I just said never encounters any issue with nonlocality, either in terms of Bell's theorem in EPR systems, or in terms of fermionic indistinguishability in white dwarfs.

It can still be nonlocal by hidden assumption.
 
  • #78
ddd123 said:
Well, personally, I'm not for nonlocal realism because I don't like the LET assumption or non-forward causality. Basically my view is nonlocal nonrealist. I just can't grasp your argument: I don't know about that. Surely I am intellectually limited, because otherwise I could acknowledge your point or rebut further. If I could see a fully laid-out exposition in a paper, or a completely different explanation (maybe written by someone else, since your style doesn't seem to be able to convey the concept to me), that'd help.
Chapter 6 of Quantum Theory: Concepts and Methods by Asher Peres is well argued. http://www.fisica.net/quantica/Peres%20-%20Quantum%20Theory%20Concepts%20and%20Methods.pdf
 
  • #79
ddd123 said:
I don't know if that's local.

It can still be nonlocal by hidden assumption.
The system could be called "nonlocal" in the sense that it is holistic, like a field can be regarded as holistic. But it's not the system that gets called nonlocal in a lot of places (including on this forum, which I was reacting to initially), it is the influences that do. That's what I object to-- if you regard the system I'm talking about as a "nonlocal system", I don't have any objection. I object to the phrase "nonlocal influences", because that implies we have a propagation of an influence that moves faster than light and even back in time. My approach has none of that.
 
  • #80
Ken G said:
The system could be called "nonlocal" in the sense that it is holistic, like a field can be regarded as holistic. But it's not the system that gets called nonlocal, it is the influences that do. That's what I object to-- if you regard the system I'm talking about as a "nonlocal system", I don't have any objection. I object to the phrase "nonlocal influences", because that implies we have a propagation of an influence that moves faster than light and even back in time. My approach has none of that.

What would you say about the book Mentz linked? Is that your position?
 
  • #81
ddd123 said:
What would you say about the book Mentz linked? Is that your position?
Peres argues that in EPR it is impossible to define an observer-independent concept of 'before' and 'after.' Using this concept along with particle-oriented ideas leads to non-understanding. There's a lot more than that in the chapter.
 
  • #82
ddd123 said:
What would you say about the book Mentz linked? Is that your position?
I haven't read the book, but from a few short snapshots, I'd say the author is quite careful not to add anything more than is necessary, so I think we are largely in agreement. For example, he says:
"A quantum system is a useful abstraction, which frequently appears in the literature, but does not really exist in nature. In general, a quantum system is defined by an equivalence class of preparations. (Recall that “preparations” and “tests” are the primitive notions of quantum theory. Their meaning is the set of instructions to be followed by an experimenter.)"

So there is no requirement that the system be composed of "independent parts"; that is no kind of fundamental attribute of a system in his approach, and that's what I am advocating as well. He goes on:
"We can then define a state as follows: A state is characterized by the probabilities of the various outcomes of every conceivable test."
So yes, that's what I'm talking about. It's the set of tests that matter, not the set of "real parts". The concept of a part is quite optional, it only works in some situations (though granted, quite a lot).
 
  • #83
Then I'll try to work through that chapter. Thanks both.
 
  • #84
Ken G said:
Explain how a delayed choice quantum eraser "changes" a correlation.
After a delay of several ns, a random event occurs that affects the results for an entangled partner that already hit another screen: when the which-path information was "preserved," the detections showed a which-path pattern, and when the which-path information was erased, an interference pattern was observed.
Ken G said:
The correlations are specified by the preparation.
The correlations occur only when a random obfuscation occurs later.
 
  • #85
jerromyjon said:
After a delay of several ns, a random event occurs that affects the results for an entangled partner that already hit another screen: when the which-path information was "preserved," the detections showed a which-path pattern, and when the which-path information was erased, an interference pattern was observed.

The correlations occur only when a random obfuscation occurs later.
There is no need to assert when the correlations occur, if we simply say that the system comprises the preparation and all the probabilities associated with all the possible measurements. There is no "change", there is only a decision about what measurements to do. It makes no difference when those decisions were made, there are no changes-- unless we insist on believing in the concept of independent probabilities for the "parts." That's the same idea that gives us retrocausality. I say stick to only what we need, and both the idea that there are "influences", and also the idea that "correlations change", go away immediately.
 
  • #86
ddd123 said:
In the latter case, how do you consider it with respect to different inertial frames? In a frame where Bob measures first, it's Bob influencing FTL and Alice's side is being influenced, and vice-versa?
Yes, and the cute thing about QM is that it's all the same how you look at it. It's some initially random common property that gets changed to match/oppose the setting of the "first" detector, then at the "second" detector it gets changed again for the second particle with probability depending on the first value... Statistically it works out to be equivalent in either order.

If you insist on using a deterministic simulation with a set value for the property even before the measurements, you do have different evolution depending on how you look at it, but still both possibilities are OK, so flip a coin or otherwise "predetermine" which one you want to use, the same way you predetermined the initial hidden property. In the end it just doesn't matter.
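A quick numerical check of that order-independence (my own toy bookkeeping, assuming the usual cos² agreement rule for the pair): treat either side as the "first" detector and apply the update in that order, and the joint statistics come out the same either way.

```python
import numpy as np

rng = np.random.default_rng(2)

def run(alpha, beta, bob_first, n=200_000):
    """Collapse-style bookkeeping: the 'first' side is 50/50 at its own setting,
    the 'second' side then agrees with probability cos^2(setting difference)."""
    counts = np.zeros((2, 2))
    p_same = np.cos(alpha - beta) ** 2
    for _ in range(n):
        r1 = rng.integers(2)                       # first detector: 50/50
        r2 = r1 if rng.random() < p_same else 1 - r1
        a, b = (r2, r1) if bob_first else (r1, r2)
        counts[a, b] += 1
    return counts / n

alpha, beta = 0.0, np.radians(22.5)
print(run(alpha, beta, bob_first=False))   # Alice treated as "first"
print(run(alpha, beta, bob_first=True))    # Bob treated as "first"
# Same joint distribution either way, up to sampling noise.
```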
 
  • #87
georgir said:
If you insist on using a deterministic simulation with a set value for the property even before the measurements

Interesting you say deterministic. What's the relationship between hidden variables / realism (I reckon that they're the same) and determinism? Which one implies the other?
 
  • #88
ddd123 said:
Interesting you say deterministic. What's the relationship between hidden variables / realism (I reckon that they're the same) and determinism? Which one implies the other?
For any randomness in a model you can just as easily substitute a supposedly deterministic mechanism dependent on unknown "hidden variables". Whether you should is another question entirely-- if they stay hidden forever, Occam's razor may have something to say about them.
[edit: got confused that we're in the other thread from which we got split earlier, and posted some nonsense bits originally. now removed]
 
  • #89
Ken G said:
There is no need to assert when the correlations occur, if we simply say that the system comprises the preparation and all the probabilities associated with all the possible measurements.
In this case there is obvious reason and sufficient data recorded to determine when interference occurs and when classical trajectories occur, based on the geometry of the preparation alone.
Ken G said:
It makes no difference when those decisions were made, there are no changes-- unless we insist on believing in the concept of independent probabilities for the "parts."
How do you relate a detection at D0, which occurs first, with particle or wave behavior that depends on whether the entangled partner is later detected in one slit or in both?
 
  • #90
jerromyjon said:
In this case there is obvious reason and sufficient data recorded to determine when interference occurs and when classical trajectories occur, based on the geometry of the preparation alone.
Then please say when the correlations occurred, and how that means any correlation ever "changed" in a delayed choice experiment. It seems to me you are simply applying the philosophy of local realism plus nonlocal influences, but no data forces that interpretation upon us. My interpretation is that correlations do not exist until sufficient measurements have occurred that could demonstrate those correlations. Using that simple rule, they never "change" in any experiment, unless you do a different experiment, in which case it makes perfect sense it could produce different correlations. All we need to understand is why a different experiment produces different correlations, and quantum mechanics already does that for us, in a way that is perfectly consistent so does not "change."
How do you relate a detection at D0, which occurs first, with particle or wave behavior that depends on whether the entangled partner is later detected in one slit or in both?
It makes no difference to me at all when the detections occur. All physics does is connect a set of measurements to their expected probability distributions and correlations, where you first have to specify both the preparation and what measurements are being made. I don't care at all when the measurements get made; that plays no role in the physics of entanglement experiments, and affects our expectations for those outcomes not at all. That's simply a fact about these experiments. So all that is happening in delayed choice is, we go in with an expectation that the times when we do the measurements should matter, but we find out they don't, so we retrofit this idea that something had to "change" to get that to be true. But the blame is all on our initial expectation that when we did the measurements should matter, even though we have no mechanism in mind by which it possibly could matter. We should simply recognize our initial expectation was wrong, and drop it. It's all a holdover from local realism; if we didn't go in with that philosophy in the first place, we wouldn't need to retrofit it.
 
Last edited:
  • #91
Ken G said:
fermionic indistinguishability in white dwarfs.

Ironically, Ken G, when I think of EPR I can sort of understand what you're saying, but the white dwarf example is holding me back. Actually, are you sure it is a valid example? If the information of an emptied state on one side doesn't travel at least light-like to the other side so that you can fill it, you risk having an overfilled star for some inertial frames (i.e. in some frames the emptying occurs after the filling).
 
  • #92
ddd123 said:
Ironically, Ken G, when I think of EPR I can sort of understand what you're saying, but the white dwarf example is holding me back. Actually, are you sure it is a valid example? If the information of an emptied state on one side doesn't travel at least light-like to the other side so that you can fill it, you risk having an overfilled star for some inertial frames (i.e. in some frames the emptying occurs after the filling).
What I mean about white dwarfs is that if you look at interactions between particles, say you want to understand the heat conduction, you find that electrons deep in the Fermi sea don't scatter at all-- because the state they would have to scatter into is already "occupied." That's the language we use to talk about what is happening, but it's not a very good language, because it is deeply steeped in a form of local realism that can lead us astray in other applications. We imagine that star is full of different electrons, each with their own momentum state, and we say that the one electron can't go into a momentum state where there is already another one, but the whole reason that can't happen is because the electrons don't have identities like that! So the language is internally inconsistent, though common and somewhat innocuous if not taken too literally.

What is actually true is that if you look at the combined wavefunction of all the electrons, there simply are not unique individual electron states-- you can decompose into a concept of individual electron states in a host of different ways, akin to choosing a different basis for a single-particle wavefunction. So it's just not true that there is "already an electron in that momentum state", that is merely one way to translate the combined wavefunction into language that sounds like it kind of makes sense, but should not be taken literally. For example, it should not be taken so literally as to imagine that when we try to scatter an electron and find we cannot, somehow that "other electron" that is "already in that momentum state" produces some "nonlocal influence" that "prevents" our electron from scattering. None of that language is supported by the quantum mechanics that is determined by the total wavefunction of all the electrons, which does not distinguish any individual electrons at all. The experiment that is trying to scatter an electron could determine properties like the momentum of whatever electron is being culled out in that way, and only then is there "that electron" with "that momentum", but no experiment is doing that for any "other individual electron", so we shouldn't even talk about each "other individual electron" like it was a real thing there. When you avoid that, you avoid the whole concept of "influences" between electrons, you simply don't think of the system as being comprised of individual independent electrons-- it is a whole system that contains some number of electrons, none of which are distinguished and none of which have individual momenta.
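That non-uniqueness is easy to exhibit in miniature. Below is a toy two-fermion sketch (a discrete model of my own, nothing like a real white dwarf calculation): the antisymmetrized amplitude vanishes if you try to "occupy" the same orbital twice, and rotating the pair of occupied orbitals into a different pair gives exactly the same combined state, so there is no fact of the matter about which electron is "in" which state.

```python
import numpy as np

def slater(phi1, phi2):
    """Antisymmetrized two-particle amplitude built from two orbitals."""
    return (np.outer(phi1, phi2) - np.outer(phi2, phi1)) / np.sqrt(2)

# A toy three-state "momentum" basis.
e0 = np.array([1.0, 0.0, 0.0])
e1 = np.array([0.0, 1.0, 0.0])

# Pauli exclusion: putting both fermions in the same orbital gives zero amplitude.
print(np.allclose(slater(e0, e0), 0))       # True

# Non-uniqueness of "which electron is in which state": mixing the two occupied
# orbitals by a rotation leaves the antisymmetric combined state unchanged.
A = slater(e0, e1)
c, s = np.cos(0.7), np.sin(0.7)
f0 = c * e0 + s * e1
f1 = -s * e0 + c * e1
print(np.allclose(slater(f0, f1), A))       # True: same physical state
```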
 
  • #94
Mentz114 said:
... Caution: All the wisdom says that one cannot exceed the classical limit with a simulation, so don't be too hopeful.
Mentz114,

I don't think we should be covering this subject in this thread (or any thread, actually). The CHSH formula is subject to the same objections I had above for the listed page. There is not going to be a breakthrough here; you and I both know Bell is a limitation. If someone wants to make assertions that violate generally accepted science, those should be published elsewhere rather than debated here.

The simulation is fine as a basic tutorial. It does not make any unusual claims. It is only what is being said here that I have an issue with.
 
  • #95
DrChinese said:
Mentz114,

I don't think we should be covering this subject in this thread (or any thread, actually). The CHSH formula is subject to the same objections I had above for the listed page. There is not going to be a breakthrough here; you and I both know Bell is a limitation. If someone wants to make assertions that violate generally accepted science, those should be published elsewhere rather than debated here.

The simulation is fine as a basic tutorial. It does not make any unusual claims. It is only what is being said here that I have an issue with.
I agree. What I posted should have been a private message.
 
  • #96
Mentz114 said:
I agree. What I posted should have been a private message.

I think that makes more sense to follow up on. Thanks.

-DrC
 
  • #97
Ken G said:
What I mean about white dwarfs is that if you look at interactions between particles, say you want to understand the heat conduction, you find that electrons deep in the Fermi sea don't scatter at all-- because the state they would have to scatter into is already "occupied." That's the language we use to talk about what is happening, but it's not a very good language, because it is deeply steeped in a form of local realism that can lead us astray in other applications. We imagine that star is full of different electrons, each with their own momentum state, and we say that the one electron can't go into a momentum state where there is already another one, but the whole reason that can't happen is because the electrons don't have identities like that! So the language is internally inconsistent, though common and somewhat innocuous if not taken too literally.

What is actually true is that if you look at the combined wavefunction of all the electrons, there simply are not unique individual electron states-- you can decompose into a concept of individual electron states in a host of different ways, akin to choosing a different basis for a single-particle wavefunction. So it's just not true that there is "already an electron in that momentum state", that is merely one way to translate the combined wavefunction into language that sounds like it kind of makes sense, but should not be taken literally. For example, it should not be taken so literally as to imagine that when we try to scatter an electron and find we cannot, somehow that "other electron" that is "already in that momentum state" produces some "nonlocal influence" that "prevents" our electron from scattering. None of that language is supported by the quantum mechanics that is determined by the total wavefunction of all the electrons, which does not distinguish any individual electrons at all. The experiment that is trying to scatter an electron could determine properties like the momentum of whatever electron is being culled out in that way, and only then is there "that electron" with "that momentum", but no experiment is doing that for any "other individual electron", so we shouldn't even talk about each "other individual electron" like it was a real thing there. When you avoid that, you avoid the whole concept of "influences" between electrons, you simply don't think of the system as being comprised of individual independent electrons-- it is a whole system that contains some number of electrons, none of which are distinguished and none of which have individual momenta.

Of course I agree with all this. But what would you say about my issue specifically? Surely, the information that a state has been emptied must travel timelike or lightlike to avoid Pauli principle violation for some inertial frames, in case someone wants to fill that state on the other side of the star. But then aren't we falling back to localized parts for a star that should be a holistic object?
 
  • #98
ddd123 said:
Surely, the information that a state has been emptied must travel timelike or lightlike to avoid Pauli principle violation for some inertial frames, in case someone wants to fill that state on the other side of the star. But then aren't we falling back to localized parts for a star that should be a holistic object?
It sounds like you are asking: if a cosmic ray enters a white dwarf and knocks an electron clear out from deep in the Fermi sea, what is the wavefunction of the white dwarf? If the interaction is very quick, the cosmic ray could establish very accurately the energy of the electron that emerges, so let's say the electron and cosmic ray both come out with a very definite energy. Then at first it will look like a wavefunction with a missing momentum state, much like an atom with a hole deep in one of its energy levels. But the location the electron came from would be uncertain, perhaps anywhere in the star. The wavefunction will then evolve via the Schroedinger equation, which has some interaction terms that will eventually fill that hole and release some energy, perhaps as a photon. But it will take a long time for that transition to happen, I would guess at least the light travel time across the star, but perhaps much longer. I expect the situation would be like ionizing an electron from a deep shell in an atom-- the light crossing time is much shorter than the timescale for a transition to actually occur, so the issue doesn't even come up.
 