Can we violate Bell inequalities by giving up CFD?

zonde
bhobba said:
Entanglement has nothing to do with anything like that - it's simply applying the principle of superposition to systems. I gave a very careful explanation before - it's really all there is to it. Nothing weird in the sense of being mystical etc. is going on - it simply leads to a different type of correlation than occurs classically. The difference is that classically you know it has properties all the time, i.e. the green and red slips of paper are always green and red. In QM it's more subtle, as Bell's theorem shows - but it's still just a correlation - it's not some phenomenon that needs further explanation. We know its explanation - systems can be in superposition and hence are correlated in a way different from classical correlations.
Haelfix said:
But at the end of the day, as long as you give up realism (counterfactual definiteness, to use the philosophical lingo) and simply accept that we don't have bits, but instead we have qubits, there is absolutely nothing bizarre about Bell's inequalities being violated.
I quoted these posts from another thread. I don't want to distract the discussion in that thread, so I'm starting a new one about the statements in these posts.

Basically, the question is whether we can violate Bell inequalities with two separated but correlated systems that can be as non-classical as we like (as long as we can speak about paired "clicks" in detectors), i.e. if we give up counterfactual definiteness (CFD) but keep locality.
Bhobba and Haelfix are making the bold claim that this can be done. But this is just handwaving. So I would like to ask them to demonstrate it with a model. Say, how can using correlated qubits at two spacelike separated places lead to violation of Bell inequalities in paired detector "clicks"?

There is an example of a very simple model that could be used as a baseline:
https://www.physicsforums.com/showthread.php?p=2817138#post2817138
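For reference, the quantum prediction that any such model would have to reproduce is easy to check numerically. A minimal sketch (assuming the standard singlet correlation E(a, b) = -cos(a - b) for spin measurements along angles a and b):

```python
import numpy as np

# Standard QM singlet correlation for spin measurements along
# angles a and b: E(a, b) = -cos(a - b)
def E(a, b):
    return -np.cos(a - b)

# CHSH angle choices (radians) that maximize the quantum value
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# S = 2*sqrt(2) ≈ 2.83, above the local-model bound of 2
```

The local (CFD) bound for the same CHSH combination is 2, which is exactly the gap this thread is about.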
 
zonde said:
Bhobba and Haelfix are making bold claim that this can be done

There is nothing bold about it - it's bog-standard QM.

You want a specific model - well here is one (see post 137):
https://www.physicsforums.com/threads/the-born-rule-in-many-worlds.763139/page-7

It's based on the following axiom:
'An observation/measurement with possible outcomes i = 1, 2, 3 ... is described by a POVM Ei such that the probability of outcome i is determined by Ei, and only by Ei, in particular it does not depend on what POVM it is part of.'

Note - it EXPLICITLY bases QM on observations and not on things with properties independent of observations. To be even clearer - it denies counterfactual definiteness.
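A minimal numerical sketch of the axiom's content: the probability of outcome i is Tr(ρE_i), and it depends only on E_i. (The two-element projective POVM below is just an illustrative choice.)

```python
import numpy as np

# Born rule for a POVM {E_i}: p(i) = Tr(rho E_i), and p(i) depends
# only on E_i. Illustrative choice: spin-z measurement on |+x>.
plus_x = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus_x, plus_x)

E_up = np.diag([1.0, 0.0])    # POVM element for outcome "up"
E_down = np.diag([0.0, 1.0])  # POVM element for outcome "down"

p_up = np.trace(rho @ E_up)
p_down = np.trace(rho @ E_down)
# p_up ≈ 0.5, p_down ≈ 0.5; the probabilities sum to 1
```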

Thanks
Bill
 
It depends on what one means by locality and counterfactual definiteness (determinism).

If by locality one means local causality or classical relativistic causality, then no, it is not possible to keep locality by giving up counterfactual definiteness.

If by locality one means no superluminal signalling, then yes, it is possible to keep locality by giving up counterfactual definiteness.

E.g. http://arxiv.org/abs/1503.06413 - the interpretation of history in this paper may be controversial, but the physics should be correct.
 
I have great difficulty seeing the relevance of counterfactual definiteness.

Counterfactual definiteness concerns the results of measurements that are not made; but isn't any Bell-type experiment about repetitions of two spacelike separated measurements that are made?

I imagine the way out is that the spatial separation has to be closed before results can be compared and correlation found, but to me, that saves locality in a similar way to how superdeterminism saves locality. Either case feels like moving the location where the mathematics occurs, when the maths itself may be considered to contain non-locality. To look at it from another angle, it seems to me that giving up counterfactual definiteness in this manner is to make certain definitions of locality not relevant.

I think I just realized that some definitions of locality may imply a certain amount of counterfactual definiteness.

The choice between reality (or counterfactual definiteness) and locality isn't meaningful to me.

I find it easier to think of a choice between locality or singular outcomes.
 
lukesfn said:
I have great difficulty seeing the relevance of counterfactual definiteness.
My understanding of how CFD is relevant is as follows. I'm happy to be corrected on this, as my understanding is very provisional and I'm putting this out there to see if I've got it right.

The Bell inequalities imply that a measurement made on one particle in some sense has an impact on the result from a subsequent measurement made on its entangled twin. That impact exists regardless of whether the two measurement events are timelike or spacelike separated. If the latter is the case then we cannot say that one 'caused' the other without giving up Locality. If we don't want to do that then one alternative is to assume that the entangled twins, by some unknown means, 'agreed between themselves' at the time they were entangled (when they would have been timelike separated) on the values they would give when the relevant measurements were made later on. Each one then carries with it that value as a hidden variable. But that can only work if it is certain at the time of entanglement that those two measurements will be made. So we must assume that the experimenter has no choice but to make the measurements that she does - in fact that what measurements she will make, and when, is already determined at the time of entanglement and cannot change. For this reason, rejecting CFD is sometimes called 'Super-Determinism'.

Under the 'rejecting-CFD' approach, the information about what is measured does not travel from the event of one measurement to another - which would require superluminal communication - but from the entanglement event to the two measurement events - both of which paths are timelike.
 
lukesfn said:
I have great difficulty seeing the relevance of counterfactual definiteness.

See the following paper:
http://www.johnboccio.com/research/quantum/notes/paper.pdf

Thanks
Bill
 
lukesfn said:
I think I just realized that some definitions of locality may imply a certain amount of counterfactual definiteness.

Yes. Local causality or classical relativistic causality or local explainability requires realism, so it is not possible to save locality by giving up realism.

On the other hand, if one defines locality as "no faster than light signalling of classical information", then we are not seeking realism, rather predictability. Bell's theorem says we can retain this type of locality if we give up predictability.
 
bhobba said:
There is nothing bold about it - it's bog-standard QM.

You want a specific model - well here is one (see post 137):
https://www.physicsforums.com/threads/the-born-rule-in-many-worlds.763139/page-7

It's based on the following axiom:
'An observation/measurement with possible outcomes i = 1, 2, 3 ... is described by a POVM Ei such that the probability of outcome i is determined by Ei, and only by Ei, in particular it does not depend on what POVM it is part of.'

Note - it EXPLICITLY bases QM on observations and not on things with properties independent of observations. To be even clearer - it denies counterfactual definiteness.
You gave a model for observation. Good. But observation acts on a state. And the problem here is that the entangled state in bog-standard QM is nonlocal (distance is ignored), i.e. it is a single mathematical object for two possibly spacelike separated quantum systems.

So can you split the entangled state into two mathematical objects so that each of the two observations acts on its own mathematical object? This certainly is not bog-standard QM.
 
  • #10
zonde said:
You gave a model for observation. Good. But observation acts on a state. And the problem here is that the entangled state in bog-standard QM is nonlocal (distance is ignored), i.e. it is a single mathematical object for two possibly spacelike separated quantum systems.

There is your problem right from the start: ascribing the property of distance between particles without reference to an observation. I specifically stated only observations were relevant.

zonde said:
So can you split the entangled state into two mathematical objects so that each of the two observations acts on its own mathematical object? This certainly is not bog-standard QM.

I am afraid it is. It's very basic to QM, which suggests your issues may stem from not having studied a good book on it.

I presume you are referring to the partial trace, which is a well-known QM process:
http://physics.stackexchange.com/qu...ake-the-partial-trace-to-describe-a-subsystem

All it's doing is, in an entangled system, observing just one subsystem. This is perfectly valid and is implemented by the observable A⊗I if you are just observing system A.
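A small numerical sketch of both operations for the singlet state (illustrative; the partial trace over subsystem B is done with numpy's einsum):

```python
import numpy as np

# Bell singlet |psi> = (|01> - |10>)/sqrt(2) on two qubits
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi).reshape(2, 2, 2, 2)  # indices (i, k; j, l): A, B row; A, B column

# Partial trace over subsystem B: rho_A[i, j] = sum_k rho[i, k, j, k]
rho_A = np.einsum('ikjk->ij', rho)
# rho_A is I/2 (maximally mixed), as expected for half of a singlet

# Observing only system A via the observable A x I gives the same
# expectation value as observing rho_A with A. Check for A = sigma_z:
sz = np.diag([1.0, -1.0])
expect_full = np.trace(rho.reshape(4, 4) @ np.kron(sz, np.eye(2)))
expect_reduced = np.trace(rho_A @ sz)
# both expectations agree (and are 0 for the singlet)
```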

Thanks
Bill
 
  • #11
atyy said:
Eg. http://arxiv.org/abs/1503.06413 - the interpretation of history in this paper may be controversial, but the physics should be correct.
Interesting paper. I started to read it. Thanks.

atyy said:
If by locality, one means no superluminal signalling, then yes, it is possible to keep locality by giving up counterfactual definiteness.
Granted, let's take locality to mean no superluminal signalling. How do you model perfect correlations between paired detections of an entangled state when matching measurement settings are used, without superluminal signalling between two distant places?
 
  • #12
bhobba said:
There is your problem right from the start. Ascribing the property of distance between particles without reference to an observation. I specifically stated only observations were relevant.
No, I am ascribing distance to two detection events (observations). The reference to two distant "quantum systems" here is just a placeholder for whatever principle we use to pair up the two distant detection events.
 
  • #13
zonde said:
How do you model perfect correlations between paired detections of an entangled state when matching measurement settings are used, without superluminal signalling between two distant places?

By not reading more into it than the formalism. All the formalism predicts is a correlation.

Thanks
Bill
 
  • #14
zonde said:
No, I am ascribing distance to two detection events (observations). The reference to two distant "quantum systems" here is just a placeholder for whatever principle we use to pair up the two distant detection events.

Then your comment that it's non-local because distance is ignored makes no sense. All that's happening is that the correlation exists regardless of distance - it's no more non-local than the example I gave in another thread with red and green slips, or than Bertlmann's socks. It makes no difference how far apart the slips, or Bertlmann's feet, are; that's all.

All that's happening is that if you get something at one detector you must get something else at the other detector - it's simply a correlation, by the very definition of correlation.

Thanks
Bill
 
  • #15
zonde said:
Granted, let's take locality to mean no superluminal signalling. How do you model perfect correlations between paired detections of an entangled state when matching measurement settings are used, without superluminal signalling between two distant places?

For this definition of locality, the more correct thing to give up is predictability. So what Bell's theorem forbids, if the inequalities are violated, is having both no superluminal signalling and predictable outcomes. We know that we can preserve no superluminal signalling if we allow unpredictable outcomes, since quantum mechanics is such a theory.
 
  • #16
atyy said:
For this definition of locality, the more correct thing to give up is predictability. So what Bell's theorem forbids, if the inequalities are violated, is having both no superluminal signalling and predictable outcomes. We know that we can preserve no superluminal signalling if we allow unpredictable outcomes, since quantum mechanics is such a theory.

http://arxiv.org/pdf/1503.06413v1.pdf
That would be the 'Operationalist Camp', page 9: (4) The Two Camps.
 
  • #17
zonde said:
Interesting paper. I started to read it. Thanks. Granted, let's take locality to mean no superluminal signalling. How do you model perfect correlations between paired detections of an entangled state when matching measurement settings are used, without superluminal signalling between two distant places?
Again, the reference to the @vanhess71 interpretation I made in post #4 above addresses and answers this question, i.e. non-local correlations.
 
  • #18
bhobba said:
Then your comment that it's non-local because distance is ignored makes no sense. All that's happening is that the correlation exists regardless of distance - it's no more non-local than the example I gave in another thread with red and green slips, or than Bertlmann's socks. It makes no difference how far apart the slips, or Bertlmann's feet, are; that's all.

All that's happening is if you get something at one detector you must get something else at the other detector - its simply a correlation - by the very definition of correlation.

I would say that a Bertlmann's-socks-type correlation is certainly a nonlocal correlation: it's a correlation between distant variables. You can say that this correlation isn't really nonlocal, because it can be explained in terms of local correlations involving hidden variables. Sock color is the hidden variable; if you assume that a sock has a color even before you look at it, then the Bertlmann's socks correlation can be explained as resulting from averaging over all possible sock colors.

So I would disagree; I would say that the correlations themselves are nonlocal, in a mathematical sense, in both the EPR case and the Bertlmann's socks case, the difference being whether the nonlocal correlations can be understood as being "implemented" by local correlations.
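The "implemented by local correlations" point can be made concrete with a small simulation of a sock-type model: each pair carries a shared hidden variable λ, and each side's ±1 outcome is a local function of its own setting and λ. (The sign-of-cosine strategy below is just one illustrative choice; Bell's theorem is the statement that no local strategy of this kind pushes CHSH above 2.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Sock-type local hidden variable model: each pair shares a hidden
# angle lam; each side's +/-1 outcome is a local function of its own
# setting and lam (sign-of-cosine is an illustrative choice).
def E(a, b, n=200_000):
    lam = rng.uniform(0.0, 2 * np.pi, n)
    A = np.where(np.cos(a - lam) >= 0, 1, -1)
    B = np.where(np.cos(b - lam) >= 0, 1, -1)
    return np.mean(A * B)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# S stays at or below 2 (up to sampling noise); the quantum value
# for the same combination of settings is 2*sqrt(2)
```

This particular strategy happens to saturate the classical bound of 2 at these angles, which makes the gap to the quantum value 2√2 easy to see.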
 
  • #19
stevendaryl said:
I would say that a Bertlmann's-socks-type correlation is certainly a nonlocal correlation: it's a correlation between distant variables.

Sure - depending on your definition of locality. It wouldn't be mine though because I preclude correlations from locality.

Thanks
Bill
 
  • #20
I think this is just a semantic argument. Bell's theorem undoubtedly proves that some experimental facts can't be explained by a classical relativistically covariant hidden variable theory. It is also beyond any doubt that these experiments are explained by perfectly relativistically covariant quantum theories. It's important to note that Bell's theorem neither invalidates conventional quantum theory nor special relativity (in particular, it doesn't imply the need for an ether or preferred reference frames or any such things). It is a matter of fact that the consistency of special relativity doesn't require Bell's locality criterion (or Einstein causality or local causality or whatever we may call it), so the violation of Bell's inequality doesn't cause any problems. Whether we call QM non-local or not is therefore nothing but semantics.

(As a side note: Also note that the violation isn't caused by the collapse of the wave-function. The probabilities that violate Bell's inequality are derived from the uncollapsed state. So the denial of collapse doesn't cure the violation of the inequality.)
 
  • #21
This is a Bertlmann's socks correlation model in accord with non-local correlations:
The two drawers (A and B, aligned detectors), spacelike or timelike separated, contain socks that are in maximally uncertain states of the colors red and blue, while still maintaining 100% correlation between the outcomes of A's and B's measurements. It is there from the very beginning. This type of entanglement enables correlations between far-distant events for the socks/photons without predetermination of the measured observable.
|φ⟩ = (1/√2)(|HV⟩ − |VH⟩), blue for V, red for H.
Reference: local QFT, vanhess71.
Related question: does this minimal interpretation allow for Bell inequality violations when the A and B detectors are not aligned?
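As a partial answer, the quantum prediction can be computed directly from that state. A sketch (assuming the standard ±1 polarization observable at angle θ, which for this singlet gives E(α, β) = −cos 2(α − β)):

```python
import numpy as np

# Polarization singlet |phi> = (|HV> - |VH>)/sqrt(2)
phi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def polarizer(theta):
    """+1/-1 valued polarization observable at angle theta (H/V basis)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def E(alpha, beta):
    return phi @ np.kron(polarizer(alpha), polarizer(beta)) @ phi

# Aligned detectors: perfect anticorrelation, E(0, 0) ≈ -1.
# Misaligned detectors at the CHSH angles for photons:
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# S ≈ 2*sqrt(2) > 2: misaligned settings do give a violation
```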
 
  • #22
rubi said:
I think this is just a semantic argument. Bell's theorem undoubtedly proves that some experimental facts can't be explained by a classical relativistically covariant hidden variable theory. It is also beyond any doubt that these experiments are explained by perfectly relativistically covariant quantum theories. It's important to note that Bell's theorem neither invalidates conventional quantum theory nor special relativity (in particular, it doesn't imply the need for an ether or preferred reference frames or any such things). It is a matter of fact that the consistency of special relativity doesn't require Bell's locality criterion (or Einstein causality or local causality or whatever we may call it), so the violation of Bell's inequality doesn't cause any problems. Whether we call QM non-local or not is therefore nothing but semantics.

(As a side note: Also note that the violation isn't caused by the collapse of the wave-function. The probabilities that violate Bell's inequality are derived from the uncollapsed state. So the denial of collapse doesn't cure the violation of the inequality.)

The important point is that in the sense in which quantum theory "explains" the experiments, quantum theory is nonlocal. Usually, we say that quantum theory does not explain the experiments, because quantum theory is not about what reality is, but what we can predict about reality. But many, including some of the greatest physicists like Landau & Lifshitz, Dirac and Weinberg, also say that quantum mechanics has a measurement problem, which indicates its incompleteness, and suggests a more fundamental theory of reality. What Bell's theorem says is that the reality underlying quantum mechanics is nonlocal (or retrocausal or superdeterministic or many-worlds). If we take quantum mechanics to explain the experiments, then we are taking quantum mechanics to be about what reality is, in which case quantum mechanics is manifestly nonlocal (wave function collapse).

Regarding the side note: denial of collapse can avoid violating the inequality at spacelike separation. To deny the collapse, by pushing the measurement to the end of the experiment, one has to deny the existence of spacelike separated observers to avoid a preferred frame, in which case the inequality is not violated at spacelike separation.
 
  • #23
rubi said:
I think this is just a semantic argument.

It is. But for some reason people get confused about it. If you, like me, think locality doesn't apply to correlated systems, then the issue is very simple. If not, then what happens is well known - you can't have both locality and counterfactual definiteness.

Thanks
Bill
 
  • #24
bhobba said:
Then your comment that it's non-local because distance is ignored makes no sense. All that's happening is that the correlation exists regardless of distance - it's no more non-local than the example I gave in another thread with red and green slips, or than Bertlmann's socks. It makes no difference how far apart the slips, or Bertlmann's feet, are; that's all.
A Bertlmann's-socks-type model is a counterfactually definite local model that can't create correlations that violate Bell inequalities. Your idea was that by giving up CFD but keeping locality it is possible to come up with a model that can violate Bell inequalities. So I don't see that you have provided any valid argument for your point.

bhobba said:
All that's happening is if you get something at one detector you must get something else at the other detector - its simply a correlation - by the very definition of correlation.
Well, one important point is that we should consider detection events as factually definite (we gave up only counterfactual definiteness). Then, as we are considering a local model, we can talk about two independent factually definite series of detection events. And with matching analyzer settings we get perfect predictability of detections in one detector given the other. So we are back to the elements of reality as defined in the EPR paradox.
So it's not "simply correlation".
 
  • #25
atyy said:
The important point is that in the sense in which quantum theory "explains" the experiments, quantum theory is nonlocal. Usually, we say that quantum theory does not explain the experiments, because quantum theory is not about what reality is, but what we can predict about reality.
All physical theories are only about what we can predict about reality. If the world behaved as classical mechanics predicts, I would also consider that only a description. What the world "really is" will remain inaccessible to physicists forever. Bell's theorem only tells us that if we wanted to construct a classical theory that explains Aspect's results, then it would have to be a non-local theory. But would a non-local classical theory really be a better "explanation"? In my opinion, it would be just as mysterious as QM itself; in fact, I would consider it much more mysterious and philosophically much less satisfactory, since it would imply that nature is perfectly deterministic, and I think that is really an unjustified prejudice about nature. I consider it much more realistic that there is really a certain inherent element of randomness that just has no deeper "explanation". (Here, "random" is a placeholder for everything that lies between the two extremes "random" and "deterministic" that are accessible to mathematics.) But one quickly dives into philosophical discussions here.

But many, including some of the greatest physicists like Landau & Lifshitz, Dirac and Weinberg, also say that quantum mechanics has a measurement problem, which indicates its incompleteness, and suggests a more fundamental theory of reality.
But since then, our knowledge about quantum theory has evolved a lot, and if you consider the quantum state purely as a container of the available information, rather than something that corresponds to a "real" "thing" (after all, a physicist can only possibly collect information, but never possibly gain knowledge about "reality"), then decoherence together with Bayesian updating is a fully satisfactory resolution of the measurement problem. (The PBR theorem doesn't invalidate this view, since it assumes the existence of some underlying description.) It is only the wishful thinking that mathematics can completely describe every aspect of nature, and that the mathematical entities of a physical theory must correspond exactly to some "real" "thing" rather than just be a tool that allows us to make predictions, that causes problems. But if I have on the one hand a theory that seems to account for every aspect of the world up to any desired precision so far, and on the other hand some wishful thinking that seems to be in disagreement with the theory, I would rather give up the wishful thinking than the reliable theory.

What Bell's theorem says is that the reality underlying quantum mechanics is nonlocal (or retrocausal or superdeterministic or many-worlds). If we take quantum mechanics to explain the experiments, then we are taking quantum mechanics to be about what reality is, in which case quantum mechanics is manifestly nonlocal (wave function collapse).
There needn't be a "reality" underlying quantum mechanics. It's perfectly possible that quantum mechanics will remain the last word. Bell's theorem tells us that any replacement of quantum mechanics by a classical theory that accounts for all the statistical features must be non-local. But there is no need for a replacement of quantum mechanics in the first place.

Regarding the side note: denial of collapse can avoid violating the inequality at spacelike separation. To deny the collapse, by pushing the measurement to the end of the experiment, one has to deny the existence of spacelike separated observers to avoid a preferred frame, in which case the inequality is not violated at spacelike separation.
I don't agree that pushing the measurement to the end of the experiment would require one to deny spacelike separation between the observers. The measurements are still performed by spacelike separated observers and the uncollapsed state predicts exactly all the statistical features, including the correlations that lead to the violation of Bell's inequality.
 
  • #26
bhobba said:
See the following paper:
http://www.johnboccio.com/research/quantum/notes/paper.pdf
A quite bad paper, because it ignores that CFD is derived in Bell's proof. Bell uses locality and the EPR criterion of reality to derive CFD. Thus, all hopes to "give up CFD" are meaningless once one does not have to assume it but can derive it.
 
  • #27
rubi said:
All physical theories are only about what we can predict about reality. If the world behaved as classical mechanics predicts, I would also consider that only a description. What the world "really is" will remain inaccessible to physicists forever. Bell's theorem only tells us that if we wanted to construct a classical theory that explains Aspect's results, then it would have to be a non-local theory. But would a non-local classical theory really be a better "explanation"? In my opinion, it would be just as mysterious as QM itself; in fact, I would consider it much more mysterious and philosophically much less satisfactory, since it would imply that nature is perfectly deterministic, and I think that is really an unjustified prejudice about nature. I consider it much more realistic that there is really a certain inherent element of randomness that just has no deeper "explanation". (Here, "random" is a placeholder for everything that lies between the two extremes "random" and "deterministic" that are accessible to mathematics.) But one quickly dives into philosophical discussions here.

The measurement problem is not a question about randomness versus determinism, nor about personal likes and dislikes about locality and nonlocality. The measurement problem is that one has to have a classical observer who sits outside the quantum system. We believe the laws of physics also include the observer, but in the orthodox interpretation quantum mechanics cannot be a theory of everything, because the observer always stands apart.

rubi said:
But since then, our knowledge about quantum theory has evolved a lot, and if you consider the quantum state purely as a container of the available information, rather than something that corresponds to a "real" "thing" (after all, a physicist can only possibly collect information, but never possibly gain knowledge about "reality"), then decoherence together with Bayesian updating is a fully satisfactory resolution of the measurement problem. (The PBR theorem doesn't invalidate this view, since it assumes the existence of some underlying description.) It is only the wishful thinking that mathematics can completely describe every aspect of nature, and that the mathematical entities of a physical theory must correspond exactly to some "real" "thing" rather than just be a tool that allows us to make predictions, that causes problems. But if I have on the one hand a theory that seems to account for every aspect of the world up to any desired precision so far, and on the other hand some wishful thinking that seems to be in disagreement with the theory, I would rather give up the wishful thinking than the reliable theory.

Taking the collapse to be analogous to Bayesian updating doesn't solve the measurement problem, because the observer stands apart, or at the very least has to make the classical/quantum cut. The fundamental reason is that the wave function is not necessarily real. However, we believe the measurement outcomes are real. So the observer must place the cut to say what is real and what is quantum.

rubi said:
There needn't be a "reality" underlying quantum mechanics. It's perfectly possible that quantum mechanics will remain the last word. Bell's theorem tells us that any replacement of quantum mechanics by a classical theory that accounts for all the statistical features must be non-local. But there is no need for a replacement of quantum mechanics in the first place.

Yes, one can take that view. But many have not, including Landau & Lifshitz, Dirac, Weinberg, Tsirelson, Bell etc. Perhaps it is pointing to new physics, just as in the Wilsonian effective field theory viewpoint, the UV cutoff points towards new physics.

rubi said:
I don't agree that pushing the measurement to the end of the experiment would require one to deny spacelike separation between the observers. The measurements are still performed by spacelike separated observers and the uncollapsed state predicts exactly all the statistical features, including the correlations that lead to the violation of Bell's inequality.

One has to deny spacelike separation, because to push the measurement to the end means Alice and Bob measure simultaneously, which is possible in one frame since they are spacelike separated. However, since they are spacelike separated, the measurement will not be simultaneous in another frame, so that means we have not pushed the measurement to the end in all frames. If we choose only the frame in which they measure simultaneously, then we will have a preferred frame, which would negate the point of pushing the measurement to the end. So we have to deny spacelike separation, ie. Bob has to deny that Alice performed the measurement at spacelike separation.
 
  • #28
morrobay said:
This is a Bertlmann's socks correlation model in accord with non-local correlations:
The two drawers (A and B, aligned detectors), spacelike or timelike separated, contain socks that are in maximally uncertain states of the colors red and blue, while still maintaining 100% correlation between the outcomes of A's and B's measurements. It is there from the very beginning. This type of entanglement enables correlations between far-distant events for the socks/photons without predetermination of the measured observable.
You have to specify how to pair detections, i.e. what the coincidence window is.
Another point is that what you do with the photon beam can change the type of correlation (positive/negative). Your model does not seem to be able to do that.
 
  • #29
atyy said:
The measurement problem is not a question about randomness versus determinism, nor about personal likes and dislikes about locality and nonlocality. The measurement problem is that one has to have a classical observer who sits outside the quantum system. We believe the laws of physics also include the observer, but in the orthodox interpretation quantum mechanics cannot be a theory of everything, because the observer always stands apart.

Taking the collapse to be analogous to Bayesian updating doesn't solve the measurement problem, because the observer stands apart, or at the very least has to make the classical/quantum cut. The fundamental reason is that the wave function is not necessarily real. However, we believe the measurement outcomes are real. So the observer must place the cut to say what is real and what is quantum.
Well, the measurement problem is really about whether there is a theory that accounts for every aspect of the world or not. A physicist can be a pragmatist who believes that his theory is only a tool that encodes only the particular aspects of the world that he is currently interested in and apart from that, the theory doesn't really tell him anything about "reality". So if he wants to describe, say, some hydrogen atom, then he doesn't need his theory to include himself (the observer). The only requirement is that more detailed descriptions of the same system mustn't contradict each other (for example a naive description of the hydrogen atom and a description that includes interaction with the environment). There can be many levels of "nested" (in terms of complexity) theories that consistently describe the same phenomenon and need not include the whole universe. You can describe a hydrogen atom on Earth already without including the Andromeda galaxy in the description. The classical/quantum cut is not "real". It's a choice that the physicist makes when he decides which particular theory he wants to use to describe his system. The measurement problem is only a problem to a physicist, who is convinced that there must be some theory of everything that also includes himself. I would argue that this physicist has no basis for his conviction.

Yes, one can take that view. But many have not, including Landau & Lifshitz, Dirac, Weinberg, Tsirelson, Bell etc. Perhaps it is pointing to new physics, just as in the Wilsonian effective field theory viewpoint, the UV cutoff points towards new physics.
There is certainly always new physics to be discovered, but I see no reason to believe that this new physics must be a classical description, so I don't agree that the violation of Bell's inequality implies that nature must be non-local. I also think that the measurement problem and the violation of Bell's inequality are not necessarily related.

One has to deny spacelike separation, because to push the measurement to the end means Alice and Bob measure simultaneously, which is possible in one frame since they are spacelike separated. However, since they are spacelike separated, the measurement will not be simultaneous in another frame, so that means we have not pushed the measurement to the end in all frames. If we choose only the frame in which they measure simultaneously, then we will have a preferred frame, which would negate the point of pushing the measurement to the end. So we have to deny spacelike separation, ie. Bob has to deny that Alice performed the measurement at spacelike separation.
I don't see how pushing the measurement to the end implies that Alice and Bob measure simultaneously. What I'm saying is that the textbook derivation of the violation of Bell's inequality with conventional quantum mechanics never references the collapse. All probabilities are calculated with the pre-collapse state.
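That textbook calculation can be sketched numerically. The following is a minimal illustration, not from the thread: it assumes a spin singlet and the standard CHSH angles, and every expectation value is taken in the single pre-measurement state, so no collapse appears anywhere in the computation.

```python
import numpy as np

# Spin measurement operator at angle theta in the x-z plane
def spin_op(theta):
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# Correlation E(a, b) = <psi| A(a) (x) B(b) |psi>; every term is an
# expectation value in the one pre-measurement state -- no collapse used
def E(a, b):
    return np.real(psi.conj() @ np.kron(spin_op(a), spin_op(b)) @ psi)

# CHSH combination at the standard angles
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ≈ -2.828, i.e. |S| = 2*sqrt(2) > 2
```

The result reproduces the Tsirelson value ##2\sqrt{2}##, exceeding the local bound of 2, with all probabilities computed from the pre-collapse state.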
 
  • #30
rubi said:
It is also beyond any doubt that these experiments are explained by perfectly relativistically covariant quantum theories.
No, relativistic variants of quantum theories are far from "perfectly relativistically covariant". It is far from perfect simply to use the Heisenberg picture, because it allows one to hide the non-relativistic elements which the Schrödinger picture makes obvious in the wave function. One could name them perfectly Lorentz-ether-compatible theories, but not more.
rubi said:
It's important to note that Bell's theorem neither invalidates convential quantum theory nor special relativity (In particular, it doesn't imply the need for an ether or preferred reference frames or any such things).
Ok, one can say that one has an alternative. Instead of giving up Einstein causality and returning to classical causality in a preferred frame, we can give up causality completely - one no longer has to search for causal explanations of observed correlations; correlations are correlations, such is life, so what. The tobacco industry would be happy.

If you don't want to give up causality at all (not the cheap "signal locality", which I would prefer to name correlarity, but a meaningful notion of causality, which includes Reichenbach's principle of common cause), you are forced to accept a preferred foliation.

rubi said:
It is a matter of fact that the consistency of special relativity doesn't require Bell's locality criterion (or Einstein causality or local causality or whatever we may call it), so the violation of Bell's inequality doesn't cause any problems. Whether we call QM non-local or not is therefore nothing but semantics.
I disagree. Giving up causality kills an important part of science - the search for realistic causal explanations of observed correlations.
 
  • #31
zonde said:
A Bertlmann's socks type model is a counterfactually definite local model that can't create correlations that violate a Bell inequality. Your idea was that by giving up CFD but keeping locality it is possible to come up with a model that can violate Bell inequalities. So I don't see that you provided any valid argument for your point.

I gave a model that specifically rejects counter-factual definiteness and predicts the violation of Bell's inequality. Obviously your assertion is wrong.

Oh - I nearly forgot to mention - I make no claim about locality because I don't believe locality applies to correlated systems. But if you do, by a suitable definition of locality, you can reject CFD and keep locality.

Thanks
Bill
 
  • #32
Ilja said:
A quite bad paper, because it ignores that CFD is derived in Bell's proof. Bell uses locality and the EPR criterion of reality to derive CFD. Thus, all the hopes to "give up CFD" are meaningless, once one does not have to assume it, but can derive it.

So? It uses a definition of CFD and locality and shows it is violated by QM.

Thanks
Bill
 
  • #33
Ilja said:
No, relativistic variants of quantum theories are far from "perfectly relativistically covariant". It is far from perfect simply to use the Heisenberg picture, because it allows one to hide the non-relativistic elements which the Schrödinger picture makes obvious in the wave function. One could name them perfectly Lorentz-ether-compatible theories, but not more.
There are no non-relativistic elements in relativistic quantum theories. Relativistic quantum theories carry a unitary representation of the Poincare group and this is all that is needed to call them perfectly relativistically covariant and it is also independent of the Heisenberg or Schrödinger picture. Observers that are moving at relative speed agree on all observable facts. This is really universally agreed upon by all serious physicists who understand relativistic quantum theory, which is the broad majority of the physics community.

Ok, one can say that one has an alternative. Instead of giving up Einstein causality and returning to classical causality in a preferred frame, we can give up causality completely - one no longer has to search for causal explanations of observed correlations; correlations are correlations, such is life, so what. The tobacco industry would be happy.

If you don't want to give up causality at all (not the cheap "signal locality", which I would prefer to name correlarity, but a meaningful notion of causality, which includes Reichenbach's principle of common cause), you are forced to accept a preferred foliation.
There is no need to give up causality. I don't see how you get the idea that one would have to do that. It is certainly wrong.

I disagree. Giving up causality kills an important part of science - the search for realistic causal explanations of observed correlations.
The existence of a perfectly relativistically covariant theory that violates Bell's inequality proves that special relativity (which is a synonym for relativistic covariance) doesn't imply Bell's inequality.
 
  • #34
atyy said:
One has to deny spacelike separation, because to push the measurement to the end means Alice and Bob measure simultaneously, which is possible in one frame since they are spacelike separated. However, since they are spacelike separated, the measurement will not be simultaneous in another frame, so that means we have not pushed the measurement to the end in all frames. If we choose only the frame in which they measure simultaneously, then we will have a preferred frame, which would negate the point of pushing the measurement to the end. So we have to deny spacelike separation, ie. Bob has to deny that Alice performed the measurement at spacelike separation.

The end is not when Alice and Bob have completed their measurements but when they have shared their results with Charles (or each other). This final collation of results is made at timelike separation, in which case it does not matter if Alice measures ahead of Bob. There is no preferred frame; the only criterion is that measurement (collapse) must be postponed until the four states have been able to interfere. The fact that Alice and Bob enter Schrödinger cat states is unfortunate, but the conceptual problem for realists was anticipated with Wigner's friend, here played by Charles.
 
  • #35
morrobay said:
This is a Bertlmann's socks correlation model in accord with non-local correlations:
The two drawers (A and B, aligned detectors), spacelike or timelike separated, contain socks that are in maximally uncertain states of the colors red and blue, while still maintaining 100% correlation between the outcomes of A's and B's measurements. It is there from the very beginning. This type of entanglement enables correlations between far distant events for the socks/photons without predetermination of the measured observable.
##|\varphi\rangle = \tfrac{1}{\sqrt{2}}\left(|HV\rangle - |VH\rangle\right)##, blue for V, red for H.
Reference: local QFT, vanhees71.
Related question: does this minimal interpretation allow for Bell inequality violations when the A and B detectors are not aligned?
No. It allows a bilinear correlation curve, while QM predicts a cosine correlation curve. This is a common misunderstanding - perfect anticorrelation at 0°, perfect correlation at 90°, and no correlation at 45° are easily achieved with a variant of the red/blue sock model. Bell inequality violation is maximized at 22.5° etc. and cannot be achieved with local properties, no matter how complicated you make them.
 
  • #36
Ilja said:
Giving up causality kills an important part of science - the search for realistic causal explanations of observed correlations.

You consider that an important part of science - others disagree.

For me its what Feynman says:


Thanks
Bill
 
  • #37
bhobba said:
So? It uses a definition of CFD and locality and shows it is violated by QM.
And therefore it proves much less than what Bell proved, and thus is not worth reading.

rubi said:
There are no non-relativistic elements in relativistic quantum theories. Relativistic quantum theories carry a unitary representation of the Poincare group and this is all that is needed to call them perfectly relativistically covariant and it is also independent of the Heisenberg or Schrödinger picture. Observers that are moving at relative speed agree on all observable facts. This is really universally agreed upon by all serious physicists who understand relativistic quantum theory, which is the broad majority of the physics community.
Thanks for telling me that I'm not a serious physicist and don't understand relativistic quantum theory. If "observers that are moving at relative speed agree on all observable facts" is all you require for a "perfectly relativistically covariant" theory, you simply have different criteria for what it means. IMHO it means manifest Lorentz invariance of all elements of the theory, not only the final observable results. I have never seen such a thing in the Schrödinger picture, with consideration of a measurement process, but it should not be a problem for you to give me a reference for this, no? Or do I first have to become "serious"?

rubi said:
There is no need to give up causality. I don't see how you get the idea that one would have to do that. It is certainly wrong.
There is. Reichenbach's principle of common cause is, together with Einstein causality, all one needs to prove Bell's inequality. You may, of course, continue to call what remains after one rejects Reichenbach's common cause "causality", but I don't think the sad remains deserve this name. The pure "signal locality" certainly does not deserve it.
rubi said:
The existence of a perfectly relativistically covariant theory that violates Bell's inequality proves that special relativity (which is a synonym for relativistic covariance) doesn't imply Bell's inequality.
No, it only means that one can essentially give up causality, by rejecting Reichenbach's principle of common cause, and continue to call the remains "causality" in such a theory without raising much protest. The tobacco industry will be happy if science no longer searches for causal explanations of observed correlations.
 
  • #38
bhobba said:
You consider that an important part of science - others disagree.
For me its what Feynman says:
Hm, I was unable to locate the disagreement. Instead, I feel nicely supported by his example of astrological influences near 3:20. Slightly reformulated: if it were true that the stars could affect something on Earth - which we would observe as a correlation - then all of physics would be wrong, because there is no mechanism - no causal explanation - which would allow one to explain this influence.

So, Feynman seems to agree with me that the requirement that for correlations there should be a causal, realistic (even mechanical!) explanation is an important part of science. So important that a theory which does not provide such an explanation would be wrong - at least in this example this was quite explicit.
 
  • #39
Ilja said:
Hm, I was unable to locate the disagreement.

If it disagrees with experiment then it's wrong. In that one statement is the essence of science - not the search for realism.

Thanks
Bill
 
  • #40
Ilja said:
Thanks for telling me that I'm not a serious physicist and don't understand relativistic quantum theory. If "observers that are moving at relative speed agree on all observable facts" is all you require for a "perfectly relativistically covariant" theory, you simply have different criteria for what it means. IMHO it means manifest Lorentz invariance of all elements of the theory, not only the final observable results. I have never seen such a thing in the Schrödinger picture, with consideration of a measurement process, but it should not be a problem for you to give me a reference for this, no? Or do I first have to become "serious"?
All elements of conventional relativistic quantum theory are manifestly Lorentz invariant. Switching between the Schrödinger and Heisenberg pictures is nothing more than the application of the time-translation operator, which exists because we provably have unitary representations of the Poincare group, which include time-translation operators (this can be verified by anyone who knows how to calculate commutators, so basically every undergraduate student of quantum mechanics). Almost every textbook on relativistic QM or QFT explains it (for example Weinberg vol. 1). (Being serious definitely helps.)

There is. Reichenbach's principle of common cause is, together with Einstein causality, all one needs to prove Bell's inequality. You may, of course, continue to call what remains after one rejects Reichenbach's common cause "causality", but I don't think the sad remains deserve this name. The pure "signal locality" certainly does not deserve it.
It is well known that QM doesn't satisfy Bell's locality criterion ("Einstein causality"). That doesn't mean it is not causal. The cause for the correlations is of course that the quantum system has been prepared in a state that results in those correlations. If you prepare the system in a different state, you will not see the same correlations. This is a perfect cause and effect relationship. Bell's locality criterion is just too narrow and doesn't capture the meaning of the word causality adequately. So please don't force your personal definition of causality on everybody else.

Instead of continuously pointing to Reichenbach, you should maybe also consider Popper at some point.

No, it only means that one can essentially give up causality, by rejecting Reichenbach's principle of common cause, and continue to call the remains "causality" in such a theory without raising much protest. The tobacco industry will be happy if science no longer searches for causal explanations of observed correlations.
This is really just plain logic: A statement of the form "All X satisfy Y" can be disproven by giving an example of an X that doesn't satisfy Y.
 
  • #41
bhobba said:
If it disagrees with experiment then it's wrong. In that one statement is the essence of science - not the search for realism.
If I had to summarize the most important point of scientific methodology, I would use a similar formulation. So, no contradiction, only extreme simplification.

And, given that Feynman did not stop after this first sentence, he thought some more things were worth saying about science, no?

By the way, at 2:00 he gives another example of what it would be scientific to say, one which contains an explanation of some phenomenon. He does not say "these are simply observations, not related to our theories, thus science couldn't care less", but gives an explanation. At 2:48 he talks about extrasensory perception - which "cannot be explained by this". He notes that if it could be established that it exists, it would mean physics is incomplete, and it would be extremely interesting to physics.
 
  • #42
rubi said:
Instead of continuously pointing to Reichenbach, you should maybe also consider Popper at some point.
I was intrigued that at least half of Feynman's thoughts in that video were pure Popper. I would think of this as a case of 'great minds think alike' if not for the strange fact that Feynman was well-known to regard philosophy of science as useless ("Philosophy of science is about as useful to scientists as ornithology is to birds").

Popper wrote of falsifiability in 1934, when Feynman was only 16. Most probably Feynman was aware of Popper's ideas - although he may not have been aware that they belonged to Popper, or that Popper was a philosopher of science. Or maybe Feynman came upon them independently and later. I regard Feynman as a good philosopher as well as a brilliant scientist, notwithstanding his ostensible disdain for philosophy.
 
  • #43
andrewkirk said:
I regard Feynman as a good philosopher as well as a brilliant scientist, notwithstanding his ostensible disdain for philosophy.

He was.

Which makes his view on philosophy interesting - anti philosophy is also a philosophy.

Trouble with me is I agree with him - for me philosophy is mostly semantic waffle. Sorry - but I simply can't warm to it even though I gave it a fair go by starting a postgraduate certificate in it - although it turned out more a historical analysis of it rather than the ideas itself. It's simply not my bag.

Thanks
Bill
 
  • #44
rubi said:
All elements of conventional relativistic quantum theory are manifestly Lorentz invariant. Switching between the Schrödinger and Heisenberg pictures is nothing more than the application of the time-translation operator, which exists because we provably have unitary representations of the Poincare group, which include time-translation operators (this can be verified by anyone who knows how to calculate commutators, so basically every undergraduate student of quantum mechanics). Almost every textbook on relativistic QM or QFT explains it (for example Weinberg vol. 1).
Reading for example Fulling, Aspects of Quantum Field Theory in Curved Spacetime p.19, I have a slightly different impression.

"The Schrodinger formalism gives time a privileged role. The Heisenberg point of view permits t and the spatial coordinates to be treated on the same footing, hence permits a geometrically covariant formulation in keeping with the spirit of relativity theory." IOW, the Schrodinger formalism does not permit a manifestly covariant formulation, not? The note "As previously remarked, it is far from clear that a Schrodinger formulation should even make sense for a field system — especially with explicitly time-dependent field equations — because of the difficulty of constructing a Hamiltonian operator" seems also interesting.

rubi said:
It is well known that QM doesn't satisfy Bell's locality criterion ("Einstein causality"). That doesn't mean it is not causal. The cause for the correlations is of course that the quantum system has been prepared in a state that results in those correlations.
I know that if somebody insists on naming these poor remains "causality" it is hopeless to convince him, so be it.

rubi said:
Instead of continuously pointing to Reichenbach, you should maybe also consider Popper at some point.
I have no problem considering Popper. Popper tells me that a theory which accepts Reichenbach's principle of common cause has much greater predictive power, because it predicts zero correlation for everything which is not causally connected in the theory.
 
  • #45
Ilja said:
Reading for example Fulling, Aspects of Quantum Field Theory in Curved Spacetime p.19, I have a slightly different impression.

"The Schrodinger formalism gives time a privileged role. The Heisenberg point of view permits t and the spatial coordinates to be treated on the same footing, hence permits a geometrically covariant formulation in keeping with the spirit of relativity theory." IOW, the Schrodinger formalism does not permit a manifestly covariant formulation, not?
Not. The Schrödinger formalism is perfectly equivalent to the Heisenberg formalism. You can rewrite any Lorentz covariant theory into a form that doesn't look Lorentz covariant at first sight. Even Maxwell's equations are usually presented in a form that hides the Lorentz covariance and "gives time a preferred role". That doesn't change the fact that they are Lorentz invariant, as can be seen by rewriting them in tensor notation, and these formulations are equivalent. It is exactly the same thing for the Schrödinger and Heisenberg pictures; it's absolutely trivial to prove the equivalence, and it is done in every textbook.
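The Maxwell analogy can be made explicit. As a standard illustration (the units and symbol conventions below are the usual SI ones, not taken from the thread), the familiar 3+1 form singles out time, while the equivalent tensor form is manifestly covariant:

```latex
% Maxwell's equations in the familiar 3+1 form, where time is singled out:
\nabla \cdot \mathbf{E} = \rho/\varepsilon_0 \,, \qquad
\nabla \times \mathbf{B} - \frac{1}{c^2}\,\partial_t \mathbf{E} = \mu_0 \mathbf{J} \,, \qquad
\nabla \cdot \mathbf{B} = 0 \,, \qquad
\nabla \times \mathbf{E} + \partial_t \mathbf{B} = 0

% The same content in manifestly covariant form, with F^{\mu\nu} the field tensor:
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu \,, \qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0
```

Both sets describe exactly the same physics; only the second makes the Lorentz covariance visible at a glance.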

The note "As previously remarked, it is far from clear that a Schrodinger formulation should even make sense for a field system — especially with explicitly time-dependent field equations — because of the difficulty of constructing a Hamiltonian operator" seems also interesting.
In QFT on CST, the situation is more difficult, since one doesn't have a representation of the Poincare group anymore (obviously, since general spacetimes are usually not Poincare invariant). Anyway, we don't need QFT on CST to prove the existence of a manifestly Lorentz covariant quantum theory. There are lots of trivial examples. Just consult Reed & Simon if you're looking for rigorous proofs.

I know that if somebody insists on naming these poor remains "causality" it is hopeless to convince him, so be it.
It is the standard notion of causality that every scientist acknowledges. And guess what? None of the horror scenarios that you portrayed actually occurred. Science is doing fine and progress is made every day.

I have no problem considering Popper. Popper tells me that a theory which accepts Reichenbach's principle of common cause has much greater predictive power, because it predicts zero correlation for everything which is not causally connected in the theory.
The common cause for the correlations is the preparation of the state, which is causally connected to the event of observation. QM is actually in full agreement with your beloved principle of common cause.
 
  • #46
rubi said:
It is the standard notion of causality that every scientist acknowledges. And guess what? None of the horror scenarios that you portrayed actually occurred. Science is doing fine and progress is made every day.
A horror scenario would appear only if one took the rejection of Reichenbach's principle seriously and applied it to science in general. This is nothing we should be afraid of. So it simply prevents progress in the explanation of the violations of Bell's inequality - thus, progress toward a more fundamental theory beyond quantum theory.

If such a theory will not be found, because of such rejections, this is not very problematic. It will probably be found some hundred years later anyway. Until this happens, there is enough room yet where a lot of progress can and will be made, in particular by applying Reichenbach's principle. So, no, I do not portray any horror scenario, because I'm sure that scientists will be inconsistent in the rejection of Reichenbach's principle.

And, no, what every scientist acknowledges is only that causality also contains those poor remains. But I doubt that even a large minority of scientists would accept that Reichenbach's principle of common cause could simply be rejected, and that this would not be important for them, because their notion of causality does not contain Reichenbach's principle anyway.

rubi said:
Anyway, we don't need QFT on CST to prove the existence of a manifestly Lorentz covariant quantum theory. There are lots of trivial examples. Just consult Reed & Simon if you're looking for rigorous proofs.
Trivial examples, yes - free particle theories without interactions, as far as I know, and some very special low-dimensional examples. AFAIK, Haag's theorem is still relevant, no?

rubi said:
The common cause for the correlations is the preparation of the state, which is causally connected to the event of observation. QM is actually in full agreement with your beloved principle of common cause.

Decide what you want to claim:
1.) The principle of common cause holds in QFT.
2.) The relativistic causal structure holds in QFT.
3.) The Bell inequalities are violated in QFT.

Given that the principle of common cause, together with the relativistic causal structure, gives the Bell inequalities, believing all three seems problematic. See for example
E.G. Cavalcanti, R. Lal -- On modifications of Reichenbach’s principle of common cause in light of Bell's theorem, J. Phys. A: Math. Theor. 47, 424018 (2014), arxiv:1311.6852v1 for this.
 
  • #47
rubi said:
The common cause for the correlations is the preparation of the state, which is causally connected to the event of observation.
?
 
  • #48
rubi said:
Not. The Schrödinger formalism is perfectly equivalent to the Heisenberg formalism. You can rewrite any Lorentz covariant theory into a form that doesn't look Lorentz covariant at first sight. Even Maxwell's equations are usually presented in a form that hides the Lorentz covariance and "gives time a preferred role". That doesn't change the fact that they are Lorentz invariant, as can be seen by rewriting them in tensor notation, and these formulations are equivalent. It is exactly the same thing for the Schrödinger and Heisenberg pictures; it's absolutely trivial to prove the equivalence, and it is done in every textbook.
Sorry, but I think you conflate the equivalence of the two formalisms regarding observable predictions with manifest covariance. Manifest covariance means that all parts of the mathematical apparatus, even the unobservable ones, are covariant. A non-covariant formalism may be equivalent to a manifestly covariant one - that means the predictions about observables will be the same. But this does not make the above formalisms manifestly covariant.

The equivalence is indeed quite trivial (if we ignore all the subtleties of field theory) if we fix a time coordinate. But in the Schrödinger formalism this time coordinate plays a very different role from the space coordinates, and nothing in this formalism is manifestly covariant.

I can understand that if you have field operators ##\Phi(x,t)## defined for all points of spacetime, then you can define, in a meaningful way, an action of the Poincare group on these operators. I can also understand that if you consider complete solutions ##\phi(x,t)##, say, for a free particle, an action of the Poincare group on these solutions may be defined.

But if I define a state ##\Psi(Q,t)##, where ##Q## denotes the configuration of a field - thus a whole function ##\phi(x)## - I do not see a simple, manifest way to define a nontrivial Lorentz transformation for it.
 
  • #49
zonde said:
I quoted these post from other thread. I don't want to distract discussion in other thread so I'm starting a new one about statements in these posts.

Basically the question is if we can violate Bell inequalities by two separated but correlated systems that can be as non-classical as we like (as long as we can speak about paired "clicks in detectors") i.e. if we give up counter factual definiteness (CFD) but keep locality.
Bhobba and Haelfix are making bold claim that this can be done. But this is just handwaving. So I would like to ask to demonstrate this based on model. Say how using correlated qubits at two spacelike separated places can lead to violation of Bell inequalities in paired detector "clicks"?

Here is an example http://www.ijqf.org/archives/2402. Also, note that it is a realist theory without CFD.
 
  • #50
Ilja said:
Trivial examples, yes - free particle theories without interactions, as far as I know, and some very special low-dimensional examples. AFAIK, Haag's theorem is still relevant, no?
So you finally acknowledge the fact that there exist perfectly relativistically covariant quantum theories, contrary to your initial claim? (Free QED is already enough to correctly predict the Bell tests, by the way.)
Haag's theorem is not relevant to the existence of interacting quantum field theories. It just states that they can't be unitarily equivalent to free theories, which is neither necessary nor expected. It is strongly believed that interacting 4d QFTs exist (otherwise the Clay institute wouldn't have put a million dollar bounty on it). It's just that this is mathematically non-trivial, and if you look at the details of the interacting ##\phi^4_3## theory, you will see why.

Decide what you want to claim:
1.) The principle of common cause holds in QFT.
2.) The relativistic causal structure holds in QFT.
3.) The Bell inequalities are violated in QFT.

Given that the principle of common cause, together with the relativistic causal structure, gives the Bell inequalities, believing all three seems problematic. See for example
E.G. Cavalcanti, R. Lal -- On modifications of Reichenbach’s principle of common cause in light of Bell's theorem, J. Phys. A: Math. Theor. 47, 424018 (2014), arxiv:1311.6852v1 for this.
The causal relationship I'm talking about is that whenever we prepare the system in a specific entangled state, we see the correlations, and whenever we prepare it in a different state, we don't see the correlations (or see different correlations). Therefore, we can say that the cause for the appearance of the correlations is our preparation of the state. So quantum theory explains the correlations, even if you don't like it, and this is all a scientist needs. If this doesn't satisfy Reichenbach's principle, then Reichenbach's principle is just not a relevant principle, because it is way too strict. And the fact that the only way to save it seems to be to introduce an ether and come up with essentially a conspiracy theory is really more than enough evidence for its rejection.
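The preparation-dependence described here can be illustrated with a toy calculation. This is only a sketch of my own: it assumes ##\sigma_z \otimes \sigma_z## measurements on a singlet versus an unentangled product state, and the names `corr`, `singlet`, `product` are illustrative.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

# <sigma_z (x) sigma_z> in a given two-qubit state
def corr(state):
    op = np.kron(sz, sz)
    return np.real(state.conj() @ op @ state)

# Two different preparations
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # entangled
product = np.kron([1, 1], [1, 1]) / 2.0                        # |+>|+>

print(corr(singlet))  # -1: perfectly anticorrelated outcomes
print(corr(product))  #  0: uncorrelated outcomes
```

Same measurements, different prepared states, different correlations: the preparation is what the observed correlations covary with.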

Derek Potter said:
?
See above.

Ilja said:
Sorry, but I think you mingle the equivalence of the two formalisms regarding observable predictions with manifest covariance. Manifest covariance means that all parts, even the unobservable parts of the mathematical apparatus, have covariance. A non-covariant formalism may be equivalent to a manifestly covariant one - that means, the predictions about observables will be the same. But this does not make above formalisms manifestly covariant.

The equivalence is indeed quite trivial (if we ignore all the subtleties of field theory) if we fix a time coordinate. But in the Schrödinger formalism this time coordinate plays a very different role than the space coordinates, and nothing in this formalism is manifestly covariant.

I can understand that if you have field operators Phi(x,t) defined for all points of spacetime, then you can define a meaningful way the Poincare group acts on these operators. I can also understand that if you consider complete solutions phi(x,t), say, for a free particle, that an action of the Poincare group on these solutions may be defined.

But if I define a state Psi(Q,t), where Q denotes the configuration of a field, thus, a whole function phi(x), I do not see a simple manifest way to define a nontrivial Lorentz transformation for it.
The Lorentz covariance is exactly as manifest as it is in Maxwell's equations. In the Heisenberg picture, you have a time-independent state ##\left|\Psi\right>## and operators ##\phi(x,t)##, and you get the Schrödinger picture by defining ##\left|\Psi(t)\right> = U(t) \left|\Psi\right>## and ##\phi(x) = U(t)^\dagger \phi(x,t) U(t)##. The state will satisfy the Schrödinger equation defined by the generator of ##U(t)##, as can easily be checked by applying a time derivative. The time coordinate plays exactly the same role in both pictures and all the expectation values are the same. This is not even specific to quantum theory. You can also formulate classical relativistic theories in an initial-value formulation with a preferred time coordinate. Even GR has such a formulation (the ADM formalism). There is nothing wrong with rewriting equations in an equivalent way. And even if there were (which there isn't), free QED in the Heisenberg picture would still be a perfectly manifestly Lorentz covariant quantum theory, which you claim doesn't exist.
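The equivalence of the two pictures via ##U(t)## can be checked numerically on a toy system. A minimal sketch, assuming a single qubit with Hamiltonian ##H = \sigma_x## (##\hbar = 1##); the example is mine, not from the thread.

```python
import numpy as np

# Toy system: one qubit with Hamiltonian H = sigma_x (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = sx

# U(t) = exp(-i H t) via eigendecomposition (works for any Hermitian H)
def U(t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.array([1, 0], dtype=complex)  # fixed Heisenberg-picture state
t = 0.7

# Schrödinger picture: the state evolves, the operator is fixed
psi_t = U(t) @ psi0
schrodinger = np.real(psi_t.conj() @ sz @ psi_t)

# Heisenberg picture: the operator evolves, the state is fixed
sz_t = U(t).conj().T @ sz @ U(t)
heisenberg = np.real(psi0.conj() @ sz_t @ psi0)

print(schrodinger, heisenberg)  # both equal cos(2t) for this H
```

The two numbers agree to machine precision, which is the content of the claim that switching pictures is nothing more than applying the time-translation operator.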
 