Von Neumann QM Rules Equivalent to Bohm?

  • #201
TrickyDicky said:
Well, it seems it means different things to different people; for instance, in this thread vanhees71 identified Einstein causality with microcausality, hence the disagreement with atyy. For others it means the causal structure of Minkowski space. I was identifying it with local causality, so I refer you to https://en.wikipedia.org/wiki/User:Tnorsen/Sandbox/Bell's_concept_of_local_causality

See, this is why I dislike discussions about the foundations of QM, because they invariably go in circles and we end up redefining words. Post 102's definition of Einstein Causality, or Bell's local causality criterion given in the wiki article, is a bit weaselly. The problem, as correctly pointed out in 102, is that it presupposes notions of beables and classical probability theory. So I would call it Einstein Causality + classical concepts about what a state really *is* and how it is allowed to combine. Aspect's experiment shows that this is wrong, but that just means you have to give up one of those concepts, not necessarily both.
 
  • #202
Haelfix said:
See, this is why I dislike discussions about the foundations of QM, because they invariably go in circles and we end up redefining words. Post 102's definition of Einstein Causality, or Bell's local causality criterion given in the wiki article, is a bit weaselly. The problem, as correctly pointed out in 102, is that it presupposes notions of beables and classical probability theory. So I would call it Einstein Causality + classical concepts about what a state really *is* and how it is allowed to combine. Aspect's experiment shows that this is wrong, but that just means you have to give up one of those concepts, not necessarily both.
Sorry, but I fail to see what's weaselly about the definitions, and I don't find anything about beables in #102. I found #102 quite clear and right, and I know close to nothing about beables. It is a fact that "Einstein causality" is a concept that is often used confusingly, but I tend to think it is the kind of thing that can be corrected by openly discussing what one means by it.
 
  • #203
The problem, for instance in the wiki definition, is that it starts writing down statements about classical probability: statements where the logic goes like (A or B) = 1 or (A and B) = 0, allowing for operations where truth values commute around the place, etc.

That's very dangerous in quantum mechanics, and indeed it is what historically tripped up John Bell when he was writing down his theorem. I mean, even Bell's theorem itself is absolutely trivial, essentially a tautological statement within classical probability theory. But it shouldn't come as too much of a surprise to realize that quantum mechanics violates this, quite independently of the interpretations.

So you see, definitions where multiple different concepts are conflated at the same time are not necessarily very useful, and are part of the reason why there might be confusion. So perhaps we can agree to just refer to that statement as "Einstein Causality + classical probability theory"?
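To make the "tautological within classical probability" point concrete (this is the CHSH form, a standard illustration rather than Bell's original 1964 derivation): if each run has pre-assigned outcomes ##a, a', b, b' \in \{-1,+1\}## whose "truth values" can be combined freely, then
$$a(b+b') + a'(b-b') = \pm 2,$$
since one of ##b+b'## and ##b-b'## vanishes and the other is ##\pm 2##. Averaging over any classical probability distribution of such assignments immediately gives
$$\big|\langle ab\rangle + \langle ab'\rangle + \langle a'b\rangle - \langle a'b'\rangle\big| \le 2,$$
which is the bound that quantum mechanics violates, up to ##2\sqrt{2}##.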
 
  • #204
Haelfix said:
The problem, for instance in the wiki definition, is that it starts writing down statements about classical probability: statements where the logic goes like (A or B) = 1 or (A and B) = 0, allowing for operations where truth values commute around the place, etc.

That's very dangerous in quantum mechanics, and indeed it is what historically tripped up John Bell when he was writing down his theorem. I mean, even Bell's theorem itself is absolutely trivial, essentially a tautological statement within classical probability theory. But it shouldn't come as too much of a surprise to realize that quantum mechanics violates this, quite independently of the interpretations. So you see, definitions where multiple different concepts are conflated at the same time are not necessarily very useful, and are part of the reason why there might be confusion.
Yes, this is an interesting point and I actually wrote a similar warning in some old post.
So perhaps we can agree to just refer to that statement as "Einstein Causality + classical probability theory"?
It actually includes what, in terms of the 1964 Bell theorem, was called local hidden variables, i.e. locality + determinism.
 
  • #205
In any case, I refer martinbn to #102, which he must have missed.
 
  • #206
TrickyDicky said:
Yes, this is an interesting point and I actually wrote a similar warning in some old post.
It actually includes what in terms of the 1964 Bell theorem was called local hidden variables, i.e. Locality+determinism.

As Bell explained, determinism is not an assumption behind his analysis. It's a conclusion.

Alice and Bob decide ahead of time to measure the spins of twin particles along the same axis. Suppose that Alice measures her particle before Bob measures his. Then at the moment that Alice gets her result, she knows, with 100% probability, what Bob's result will be. So from that point on, Bob's result is deterministic.
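In the spin-singlet version (a standard way of writing what stevendaryl describes): the pair is prepared in
$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B\big),$$
and for measurements along the same axis the probability of equal results is
$$P(\text{same}) = |\langle {\uparrow\uparrow}|\psi\rangle|^2 + |\langle {\downarrow\downarrow}|\psi\rangle|^2 = 0,$$
so once Alice has her result, Bob's result along that axis is fixed with certainty.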
 
  • #207
stevendaryl said:
As Bell explained, determinism is not an assumption behind his analysis. It's a conclusion.

Alice and Bob decide ahead of time to measure the spins of twin particles along the same axis. Suppose that Alice measures her particle before Bob measures his. Then at the moment that Alice gets her result, she knows, with 100% probability, what Bob's result will be. So from that point on, Bob's result is deterministic.
Determinism in its more generic conception is not an assumption of Bell's analysis; what it actually assumes is something more restricted, a linear form of determinism.
 
  • #208
Haelfix said:
See, this is why I dislike discussions about the foundations of QM, because they invariably go in circles and we end up redefining words. Post 102's definition of Einstein Causality, or Bell's local causality criterion given in the wiki article, is a bit weaselly. The problem, as correctly pointed out in 102, is that it presupposes notions of beables and classical probability theory. So I would call it Einstein Causality + classical concepts about what a state really *is* and how it is allowed to combine. Aspect's experiment shows that this is wrong, but that just means you have to give up one of those concepts, not necessarily both.

Haelfix said:
The problem, for instance in the wiki definition, is that it starts writing down statements about classical probability: statements where the logic goes like (A or B) = 1 or (A and B) = 0, allowing for operations where truth values commute around the place, etc.

That's very dangerous in quantum mechanics, and indeed it is what historically tripped up John Bell when he was writing down his theorem. I mean, even Bell's theorem itself is absolutely trivial, essentially a tautological statement within classical probability theory. But it shouldn't come as too much of a surprise to realize that quantum mechanics violates this, quite independently of the interpretations.

So you see, definitions where multiple different concepts are conflated at the same time are not necessarily very useful, and are part of the reason why there might be confusion. So perhaps we can agree to just refer to that statement as "Einstein Causality + classical probability theory"?

I agree that Einstein causality, as defined for the Bell inequalities, does include classical probability.

There is also a second sense of locality that quantum mechanics does fulfill, which is "no faster than light transmission of classical information".

So there are two widely agreed upon definitions of locality, one of which is empty or violated by quantum mechanics, the other of which is fulfilled by quantum mechanics.

Is there a third definition of locality that corresponds to "Einstein causality without classical probability"? Maybe, but then why call it Einstein causality, which is the causality of classical relativity? Some candidates are information causality http://arxiv.org/abs/1112.1142 and macroscopic locality http://arxiv.org/abs/1011.0246.
 
  • #209
Haelfix said:
See, this is why I dislike discussions about the foundations of QM, because they invariably go in circles and we end up redefining words. Post 102's definition of Einstein Causality, or Bell's local causality criterion given in the wiki article, is a bit weaselly. The problem, as correctly pointed out in 102, is that it presupposes notions of beables and classical probability theory. So I would call it Einstein Causality + classical concepts about what a state really *is* and how it is allowed to combine. Aspect's experiment shows that this is wrong, but that just means you have to give up one of those concepts, not necessarily both.
Haelfix said:
The problem, for instance in the wiki definition, is that it starts writing down statements about classical probability: statements where the logic goes like (A or B) = 1 or (A and B) = 0, allowing for operations where truth values commute around the place, etc.

That's very dangerous in quantum mechanics, and indeed it is what historically tripped up John Bell when he was writing down his theorem. I mean, even Bell's theorem itself is absolutely trivial, essentially a tautological statement within classical probability theory. But it shouldn't come as too much of a surprise to realize that quantum mechanics violates this, quite independently of the interpretations.

So you see, definitions where multiple different concepts are conflated at the same time are not necessarily very useful, and are part of the reason why there might be confusion. So perhaps we can agree to just refer to that statement as "Einstein Causality + classical probability theory"?

I commented on this in post #208. But here is another reason why I don't think your suggestion of "Einstein causality without classical probability" saves vanhees71's argument. Let us define "Einstein causality without classical probability" as quantum mechanics. Now, how do the predictions of quantum mechanics differ depending on whether collapse is physical or not?
 
  • #210
TrickyDicky said:
Physically it may be what's preserved by the available energies, but I've never seen the Poincare group defined as the identity component of its fixed-origin subgroup (Lorentz); I'm pretty sure that is not what is meant when referring to Poincare symmetry, say, in Weinberg's The Quantum Theory of Fields, Vol. 1. Otherwise I don't see why one should additionally insist on the commutation condition and the cluster decomposition, since the proper orthochronous Lorentz group already preserves causality (it preserves orientation both spatially and in time, unlike the full Lorentz group).

Of course, that's precisely what's meant in Weinberg's book. In nature the discrete symmetries P, T, PT, and CP are all independently verified to be violated by the weak interaction. Due to the CPT theorem, valid for local microcausal QFTs, C must then also be violated. So the space-time symmetry of nature is the proper orthochronous Poincare group, not a larger group that includes the other components of O(1,3).
 
  • #211
vanhees71 said:
Of course, that's precisely what's meant in Weinberg's book. In nature the discrete symmetries P, T, PT, and CP are all independently verified to be violated by the weak interaction. Due to the CPT theorem, valid for local microcausal QFTs, C must then also be violated. So the space-time symmetry of nature is the proper orthochronous Poincare group, not a larger group that includes the other components of O(1,3).
But can you clarify why the microcausality condition must be added separately then?
And again, I know the weak interaction violates C, but I was trying to restrict the discussion to QED.
 
  • #212
martinbn said:
OT: Sorry for the side question, but what exactly is Einstein causality? It seems, from the posts above, that it is something different than the usual relativity theory causality that comes from no faster than light signals.
No, it is that faster than light is forbidden.

I prefer to use "Einstein causality" instead of "locality" because a theory with, say, a maximal speed of 10000000 c would still be a local theory; thus, using "locality" instead of "Einstein causality" suggests much more horrible consequences if one rejects it. It is one thing to reject Einstein causality and go back to a classical causality connected with some hidden absolute time, where one could later find a new speed limit, so that everything would look as it does now, only with a greater speed limit; it is a completely different thing to "reject locality".
 
  • #213
Haelfix said:
So perhaps we can agree to just refer to that statement as "Einstein Causality + classical probability theory"?

I would not object to this. But, then, add Jaynes' concept that probability theory is only extended logic. ;-)
 
  • #214
atyy said:
I agree that Einstein causality, as defined for the Bell inequalities, does include classical probability.

There is also a second sense of locality that quantum mechanics does fulfill, which is "no faster than light transmission of classical information".

So there are two widely agreed upon definitions of locality, one of which is empty or violated by quantum mechanics, the other of which is fulfilled by quantum mechanics.

Is there a third definition of locality that corresponds to "Einstein causality without classical probability"? Maybe, but then why call it Einstein causality, which is the causality of classical relativity? Some candidates are information causality http://arxiv.org/abs/1112.1142 and macroscopic locality http://arxiv.org/abs/1011.0246.
I don't know what you mean by "classical probability". Quantum theory provides probabilities, which obey the usual axioms (Kolmogorov) of probability theory as long as you apply it to experiments that make sense within the quantum-theoretical framework.

The linked-cluster theorem, valid for local microcausal relativistic QFTs, guarantees Einstein causality, i.e., no faster-than-light propagation of information. (I also don't understand what you mean by classical information; for me, information is what an observer can know about a system given its state. This information can be complete, if the system is known to be prepared in a "pure state", i.e., the statistical operator is a projection operator, or incomplete, in which case we describe it by a statistical operator which is not a projection operator, i.e., a "mixed state".)

In other words, for our example, locality and microcausality guarantee that when A measures the polarization of her photon, A knows immediately that B must find the opposite polarization, but B can only know it by either measuring it himself or getting the information from A, which needs a signal that travels, according to the relativistic space-time structure, at most at the speed of light. A knows B's photon's polarization from her knowledge about the initial state of the biphotons ("polarization singlet") and her local measurement of her photon's polarization.
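One way to state this point compactly (a standard observation, not a quote from the post): B's local statistics are governed by the reduced state
$$\rho_B = \mathrm{Tr}_A\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\mathbb{1},$$
which for the polarization singlet is the maximally mixed state and is unchanged by anything A does locally (measuring, choosing a setting, or not measuring at all). That is why A's knowledge is updated immediately while B can learn nothing until he measures himself or receives A's result by a signal.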
 
  • #215
I think that, at least partially, the arguments come from "Einstein causality" =/= "no faster than light signalling", and the fact that some take it with "=" instead of "=/=".
 
  • #216
martinbn said:
I think that, at least partially, the arguments come from "Einstein causality" =/= "no faster than light signalling", and the fact that some take it with "=" instead of "=/=".
Of course, one has to distinguish "no faster than light signalling" and "no causal influence faster than light". The first holds in quantum theory, the second is violated, because there is no way to explain the violation of the BI in a causal way without FTL.

So, my #212 was sloppy, my point there was a different one.

I think "Einstein causality" should be clearly associated with "no causal influence FTL". First, for historical reasons, because, given the EPR argument, this was clearly Einstein's understanding of causality.

Second, there is, of course, the possibility of rejecting Reichenbach's principle of common cause, which, together with Einstein causality, is sufficient to prove the Bell inequalities. Now, causality without Reichenbach's principle of common cause is not worth being called causality. In other words, if we reject Reichenbach's principle of common cause, we reject causality.

But "no faster than light signalling" is preserved. Thus, if one names this "Einstein causality", we obtain the paradox that to preserve Einstein causality we have to reject causality.
 
  • #217
vanhees71 said:
I don't know what you mean by "classical probability". Quantum theory provides probabilities, which obey the usual axioms (Kolmogorov) of probability theory as long as you apply it to experiments that make sense within the quantum-theoretical framework.

The linked-cluster theorem, valid for local microcausal relativistic QFTs, guarantees Einstein causality, i.e., no faster-than-light propagation of information. (I also don't understand what you mean by classical information; for me, information is what an observer can know about a system given its state. This information can be complete, if the system is known to be prepared in a "pure state", i.e., the statistical operator is a projection operator, or incomplete, in which case we describe it by a statistical operator which is not a projection operator, i.e., a "mixed state".)

In other words, for our example, locality and microcausality guarantee that when A measures the polarization of her photon, A knows immediately that B must find the opposite polarization, but B can only know it by either measuring it himself or getting the information from A, which needs a signal that travels, according to the relativistic space-time structure, at most at the speed of light. A knows B's photon's polarization from her knowledge about the initial state of the biphotons ("polarization singlet") and her local measurement of her photon's polarization.

Good - why didn't you make this clear earlier - I have stressed repeatedly that in my definition Einstein causality is not "no faster than light transfer of classical information".

I'm pretty sure your definition is not EPR's, but it doesn't matter since that's debatable.

But there is still a problem - how does physical collapse violate your definition of Einstein causality?
 
  • #218
So going back to the argument about collapse in QFT, my understanding is that in order to obtain finite results in renormalized perturbative QED calculations one must regularize, i.e. make the representation space provisionally discrete (finite number of degrees of freedom) until the proper limits are taken and continuous free fields are recovered.
 
  • #219
atyy said:
Good - why didn't you make this clear earlier - I have stressed repeatedly that in my definition Einstein causality is not "no faster than light transfer of classical information".

I'm pretty sure your definition is not EPR's, but it doesn't matter since that's debatable.

But there is still a problem - how does physical collapse violate your definition of Einstein causality?
Collapse, in our example, would violate Einstein causality, because it claims that A's measurement is the cause of B's finding, and that's not the case. There is no causal connection between A's and B's measurements if the measurement events (the click of the photon detector after the polarization foil) are space-like separated by definition, because otherwise the relativistic space-time description would be flawed. It may well be that this is the case, and one needs a more refined space-time model, but at least these Bell experiments only provide a case for such a violation of Einstein causality, if you interpret the "collapse" as a cause for B's measurement. As I think, is pretty clear from the meaning of "states" in QT, that's not the case here, because the cause for the correlations is not A's measurement of her photon's polarization but it was inherent for the whole time due to the creation of the photon pair by parametric down conversion. Thus, although the single-photon polarizations are maximally uncertain (in the sense of von Neumann entropy) the 100% correlation between the outcome of A's and B's measurement is there from the very beginning. This is very unintuitive for our everyday-experience trained minds, but it's the only way to make relativistic QT consistent with the underlying space-time structure, which has proven very successful given the fact that a lot of the specific structure of relativistic QFTs follows from Poincare invariance (symmetry under proper orthochronous Poincare transformations), among other things the necessity for massless vector bosons to be described by gauge theories.
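A compact way to express "the correlation is there from the very beginning" (a standard property of the singlet, not taken from the post itself): for the singlet state prepared at the source,
$$\langle \psi|\, \hat{\sigma}_{\mathbf{n}} \otimes \hat{\sigma}_{\mathbf{n}}\, |\psi\rangle = -1 \quad \text{for every direction } \mathbf{n},$$
i.e. the perfect anti-correlation for equal settings is fixed by the preparation, even though each single-particle spin/polarization is maximally uncertain (##\rho_A = \rho_B = \tfrac{1}{2}\mathbb{1}##).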
 
  • #220
Haelfix said:
See, this is why I dislike discussions about the foundations of QM, because they invariably go in circles and we end up redefining words. Post 102's definition of Einstein Causality, or Bell's local causality criterion given in the wiki article, is a bit weaselly. The problem, as correctly pointed out in 102, is that it presupposes notions of beables and classical probability theory. So I would call it Einstein Causality + classical concepts about what a state really *is* and how it is allowed to combine. Aspect's experiment shows that this is wrong, but that just means you have to give up one of those concepts, not necessarily both.
Giving up beables and classical probability theory won't save you. You can check this by examining this informal proof of Bell inequality:
https://www.physicsforums.com/showthread.php?p=2817138#post2817138
It simply describes a hypothetical but rather feasible experimental situation with a bit of "what if" type reasoning.
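For reference, one common "informal" counting version of such an argument (the linked post may differ in detail) is the Wigner–d'Espagnat form: if every particle pair carries definite answers to three yes/no questions ##A, B, C##, then simple counting of the pre-existing values gives
$$n(A^+, B^-) + n(B^+, C^-) \ge n(A^+, C^-),$$
because every pair counted on the right is counted in at least one term on the left. Quantum predictions for suitable setting triples violate the corresponding frequency inequality.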
 
  • #221
vanhees71 said:
Collapse, in our example, would violate Einstein causality, because it claims that A's measurement is the cause of B's finding, and that's not the case. There is no causal connection between A's and B's measurements if the measurement events (the click of the photon detector after the polarization foil) are space-like separated by definition, because otherwise the relativistic space-time description would be flawed. It may well be that this is the case, and one needs a more refined space-time model, but at least these Bell experiments only provide a case for such a violation of Einstein causality, if you interpret the "collapse" as a cause for B's measurement. As I think, is pretty clear from the meaning of "states" in QT, that's not the case here, because the cause for the correlations is not A's measurement of her photon's polarization but it was inherent for the whole time due to the creation of the photon pair by parametric down conversion. Thus, although the single-photon polarizations are maximally uncertain (in the sense of von Neumann entropy) the 100% correlation between the outcome of A's and B's measurement is there from the very beginning. This is very unintuitive for our everyday-experience trained minds, but it's the only way to make relativistic QT consistent with the underlying space-time structure, which has proven very successful given the fact that a lot of the specific structure of relativistic QFTs follows from Poincare invariance (symmetry under proper orthochronous Poincare transformations), among other things the necessity for massless vector bosons to be described by gauge theories.

So I think we still disagree. As far as I can tell, whether the collapse is physical or not, all predictions of the theory are the same - the only predictions are the measurement outcomes - those are the "classical outcomes" and the "classical information". If collapse is not physical, and there is no faster than light transmission of classical information, how can a physical collapse result in faster than light transmission of classical information, given that there is no difference in predictions whether collapse is physical or not?
 
  • #222
vanhees71 said:
There is no causal connection between A's and B's measurements if the measurement events (the click of the photon detector after the polarization foil) are space-like separated by definition, because otherwise the relativistic space-time description would be flawed. It may well be that this is the case, and one needs a more refined space-time model, but at least these Bell experiments only provide a case for such a violation of Einstein causality, if you interpret the "collapse" as a cause for B's measurement. As I think, is pretty clear from the meaning of "states" in QT, that's not the case here, because the cause for the correlations is not A's measurement of her photon's polarization but it was inherent for the whole time due to the creation of the photon pair by parametric down conversion. Thus, although the single-photon polarizations are maximally uncertain (in the sense of von Neumann entropy) the 100% correlation between the outcome of A's and B's measurement is there from the very beginning. This is very unintuitive for our everyday-experience trained minds, but it's the only way to make relativistic QT consistent with the underlying space-time structure, which has proven very successful given the fact that a lot of the specific structure of relativistic QFTs follows from Poincare invariance (symmetry under proper orthochronous Poincare transformations), among other things the necessity for massless vector bosons to be described by gauge theories.
But there is no way this type of reasoning can violate Bell inequality. You understand that, right? Or no?
 
  • #223
atyy said:
So I think we still disagree. As far as I can tell, whether the collapse is physical or not, all predictions of the theory are the same - the only predictions are the measurement outcomes - those are the "classical outcomes" and the "classical information". If collapse is not physical, and there is no faster than light transmission of classical information, how can a physical collapse result in faster than light transmission of classical information, given that there is no difference in predictions whether collapse is physical or not?
Yes, simply leave out the collapse and take what the formalism predicts, and everything is totally unproblematic. If the collapse is not a physical process, then you don't need it!
 
  • #224
zonde said:
But there is no way this type of reasoning can violate Bell inequality. You understand that, right? Or no?
Of course, if you adjust the filters at A's and B's places adequately, Bell's inequalities are violated. This, too, is caused by the initial preparation of the biphoton and not by either A's or B's measurement. See here

https://en.wikipedia.org/wiki/Bell'...re_violated_by_quantum_mechanical_predictions

Instead of "electron" and "spin" you can read "photon" and "polarization". The math is precisely the same.
 
  • #225
vanhees71 said:
Of course, if you adjust the filters at A's and B's places adequately, Bell's inequalities are violated. This, too, is caused by the initial preparation of the biphoton and not by either A's or B's measurement.
It's clear that QM predictions violate Bell's inequality. And no, you can't violate Bell's inequalities just by the initial preparation of the biphoton.

And again you can't violate Bell's inequality with this type of reasoning:
vanhees71 said:
Thus, although the single-photon polarizations are maximally uncertain (in the sense of von Neumann entropy) the 100% correlation between the outcome of A's and B's measurement is there from the very beginning.
 
  • #226
I don't understand this argument, because what's written in the Wikipedia paragraph quoted above is calculated by taking the expectation values that define the correlation measures violating Bell's inequality with respect to the singlet state ##|\phi \rangle##, which is the state prepared at the very beginning (in our case, photons produced by parametric down conversion).
 
  • #227
vanhees71 said:
I don't understand this argument, because what's written in the Wikipedia paragraph quoted above is calculated by taking the expectation values that define the correlation measures violating Bell's inequality with respect to the singlet state ##|\phi \rangle##, which is the state prepared at the very beginning (in our case, photons produced by parametric down conversion).
Well, take a look at the link that I already gave in #220:
https://www.physicsforums.com/showthread.php?p=2817138#post2817138
 
  • #228
vanhees71 said:
Yes, simply leave out the collapse and take what the formalism predicts, and everything is totally unproblematic. If the collapse is not a physical process, then you don't need it!

Whether or not collapse is a physical process, the quantum formalism does have it.

But you still haven't shown that collapse as a physical process violates your definition of Einstein causality. How does physical collapse lead to faster than light signalling of classical information?
 
  • #229
atyy said:
Whether or not collapse is a physical process, the quantum formalism does have it.
True. And therefore the solution must be formal, mathematical, not metaphysical or interpretational.
 
  • #230
It's purely interpretational. It's not even specific to QT but applies to any probabilistic statement. For me, collapse has a very specific meaning and is part of the interpretation of some branch of the Copenhagen interpretations. It claims that, if I make a precise measurement of an observable on a system, its state instantaneously "collapses" to the corresponding eigenstate of the self-adjoint operator representing the observable. This implies that for a delocalized system something physical instantaneously changes everywhere due to a measurement through a local interaction of the system with the measurement apparatus, which clearly violates Einstein causality. There is no experimental hint whatsoever that this is true, and it is rightly criticized in the famous EPR paper. Now, nowhere in the formalism do you need such an interpretation to apply it to real-world observations. I have repeated this argument often enough in this thread and can't say anything new about it. I think there's no doubt about the formalism and what it predicts.
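For reference, the rule being described is the von Neumann/Lüders projection postulate: if the measured observable ##\hat{O}## has spectral projectors ##\hat{P}_a##, a precise measurement with result ##a## updates the state as
$$\hat{\rho} \;\longrightarrow\; \frac{\hat{P}_a \hat{\rho} \hat{P}_a}{\mathrm{Tr}(\hat{P}_a \hat{\rho})},$$
applied instantaneously to the whole state, however delocalized the system is. Whether this update is a physical process or mere bookkeeping about the observer's knowledge is exactly the interpretational question at issue here.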
 
  • #231
vanhees71 said:
It's purely interpretational. It's not even specific to QT but applies to any probabilistic statement. For me, collapse has a very specific meaning and is part of the interpretation of some branch of the Copenhagen interpretations. It claims that, if I make a precise measurement of an observable on a system, its state instantaneously "collapses" to the corresponding eigenstate of the self-adjoint operator representing the observable. This implies that for a delocalized system something physical instantaneously changes everywhere due to a measurement through a local interaction of the system with the measurement apparatus, which clearly violates Einstein causality. There is no experimental hint whatsoever that this is true, and it is rightly criticized in the famous EPR paper. Now, nowhere in the formalism do you need such an interpretation to apply it to real-world observations. I have repeated this argument often enough in this thread and can't say anything new about it. I think there's no doubt about the formalism and what it predicts.
Fair enough. But I think it is important to make explicit that the source of disagreement here lies in the refusal to distinguish Einstein causality from microcausality. The standard QFT textbooks fail to make this distinction, so I can understand why, for vanhees71, Einstein causality is microcausality and he can never admit its violation. It is also consistent with his logic not to allow the distinction, since atyy's notion of Einstein causality, which IMO is equivalent to Bell's 1976 notion of "local causality", cannot be violated without breaking (proper orthochronous) Poincare invariance. That is something vanhees71 (and most physicists) is not ready to dispose of, so it makes sense that he is not willing even to make the distinction. The logic behind this is that there is so far no solid theory that combines loss of Poincare invariance with no FTL signalling.
But quantum experiments are obstinate and show local causality to be violated. Now, local causality is a weaker notion than the more usual "local hidden variables": it cannot be separated into a local part and a classical deterministic part, as the latter can, so there is no chance of saving either locality or classical determinism by giving up the other. Confusion and debate arise from conflating the word "local" in these two notions, derived respectively from the 1976 and 1964 versions of Bell's theorem, as explained in Wiseman's paper "The Two Bell's Theorems of John Bell".
 
  • #232
vanhees71 said:
It's purely interpretational. It's not even specific to QT but applies to any probabilistic statement. For me, collapse has a very specific meaning and is part of the interpretation of some branch of the Copenhagen interpretations. It claims that, if I make a precise measurement of an observable on a system, its state instantaneously "collapses" to the corresponding eigenstate of the self-adjoint operator representing the observable. This implies that for a delocalized system something physical instantaneously changes everywhere due to a measurement through a local interaction of the system with the measurement apparatus, which clearly violates Einstein causality. There is no experimental hint whatsoever that this is true, and it is rightly criticized in the famous EPR paper. Now, nowhere in the formalism do you need such an interpretation to apply it to real-world observations. I have repeated this argument often enough in this thread and can't say anything new about it. I think there's no doubt about the formalism and what it predicts.

I don't believe you are being consistent in terminology. Your argument is wrong because you are switching back and forth between two different definitions of Einstein causality.

If you define (A) Einstein causality = no faster than light communication, then since physical collapse gives identical predictions to non-physical collapse, physical collapse also does not allow faster than light communication. Physical collapse does not violate this notion of Einstein causality, if it makes the same predictions as non-physical collapse. This notion of Einstein causality is what is used when people talk about spacelike-separated operators commuting.

If you define (B) Einstein causality = classical relativistic causality, then physical collapse clearly violates this. But the Bell theorem shows that any theory reproducing the predictions of quantum mechanics will violate classical relativistic causality, so whether collapse is physical or not is irrelevant, since classical relativistic causality is either empty or gone. Usually when people talk about EPR, this is the definition of Einstein causality they use.

When you say "It's purely interpretational", you are using definition (A), in which there are no differences in predictions. In this definition of Einstein causality, physical collapse does not violate it.

You are using definition (B) when you say "This implies that for a delocalized system something physical instantaneously changes everywhere due to a measurement through a local interaction of the system with the measurement apparatus, which clearly violates Einstein causality. There is no experimental hint whatsoever that this is true, and it is rightly criticized in the famous EPR paper." In this definition of Einstein causality, physical collapse does violate it, but so does non-physical collapse.
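To make definition (A) concrete, here is a minimal numerical sketch (my own illustration, assuming the spin-singlet state and modelling "physical collapse" by the Lüders projection rule): Bob's local outcome probabilities are the same whether or not Alice's measurement collapses the state, and for any setting she chooses, so collapse, physical or not, permits no faster-than-light transfer of classical information.

```python
# Minimal sketch (assumed setup): singlet state; "physical collapse" modelled by the
# Lüders rule, averaged over Alice's outcomes since Bob does not know her result.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def spin_op(theta):
    """Spin observable (+/-1 outcomes) along the direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def projectors(theta):
    """Projectors onto the +1 and -1 eigenstates of spin_op(theta)."""
    A = spin_op(theta)
    return (I2 + A) / 2, (I2 - A) / 2

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet
rho = np.outer(psi, psi.conj())

def bob_probs(theta_b, theta_a=None):
    """Bob's outcome probabilities (P+, P-) at setting theta_b.
    If theta_a is given, Alice measures first and the state collapses,
    averaged over her outcomes (which are unknown to Bob)."""
    state = rho
    if theta_a is not None:
        state = sum(np.kron(P, I2) @ rho @ np.kron(P, I2) for P in projectors(theta_a))
    return tuple(np.real(np.trace(np.kron(I2, Q) @ state)) for Q in projectors(theta_b))

print(bob_probs(0.3))                 # no collapse:            (0.5, 0.5) up to rounding
print(bob_probs(0.3, theta_a=0.0))    # Alice measured along z: (0.5, 0.5) up to rounding
print(bob_probs(0.3, theta_a=1.2))    # any other setting:      (0.5, 0.5) up to rounding
```

Whatever Alice does only shows up in the joint correlations, which require bringing the two records together by an ordinary (subluminal) channel.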
 
  • #233
TrickyDicky said:
But quantum experiments are obstinate and show local causality to be violated.
No they don't. Or not yet, if you like it that way.
 
  • #234
TrickyDicky said:
But can you clarify why the microcausality condition must be added separately then?
Just so as not to leave this hanging: it obviously must be added because of the translations in the Poincare group; I must have got fixated on the Lorentz group for some reason.
 
  • #235
zonde said:
No they don't. Or not yet, if you like it that way.
Are you referring to the practical issue of loopholes or to something deeper?
 
  • #236
TrickyDicky said:
Are you referring to the practical issue of loopholes or to something deeper?
Probably I am referring to the things that fall under your category of "practical issue of loopholes".
But the word "practical" is a bit confusing. While it is a practical issue to make a loophole-free experimental setup, the loopholes themselves are not practical issues but rather placeholders for different possible fundamental phenomena, like a systematic feedback effect or contextuality of measurement.
 