Implications of quantum foundations on interpretations of relativity

  • #51
Buzz Bloom said:
I am now also confused by what you posted: "there is no randomness at all in the MWI." I may have misunderstood what I read in Wikipedia.

You should not be trying to understand QM in general, let alone the MWI, by reading Wikipedia.

We have had previous threads on this aspect of the MWI, and I'm pretty sure you were involved in at least one of them, though it might have been a while ago. If you want to rehash the issue again, it should be moved to a different thread.
 
  • Like
Likes vanhees71
  • #52
PeterDonis said:
We have had previous threads on this aspect of the MWI, and I'm pretty sure you were involved in at least one of them, though it might have been a while ago. If you want to rehash the issue again, it should be moved to a different thread.
Hi Peter:

At my advanced years I do forget things. I do not remember participating in a previous discussion of MWI.

What I would like to understand is whether or not I have misunderstood the Wikipedia text I quoted. If you think I should start a discussion of this topic in a new thread, I will do that. I would also like to understand the implication regarding randomness if you find my understanding of the Wikipedia MWI article to be correct.

Regards,
Buzz
 
  • #53
Buzz Bloom said:
What I would like to understand is whether or not I have misunderstood the Wikipedia text I quoted. If you think I should start a discussion of this topic in a new thread, I will do that.

Yes, please do. It would be too far off topic in this one.

Also, your question should not be whether you have misunderstood the Wikipedia text; Wikipedia is not a good source for actually learning the physics. You really need to look at a QM textbook or paper that talks about the MWI.
 
  • #54
Hello. If I remember correctly, Louis de Broglie wrote in his book about an experiment with a particle in a box. Say we divide the box in half and bring the halves to Tokyo and Paris. We will find the particle in the Tokyo half or the Paris half when they are opened. In this situation the particle, as a source of spacetime curvature, changes the geometry in Tokyo or in Paris, but this is not decided before opening. Can this be a case illustrating the relation between quantum entanglement and GR?
 
  • #55
mitochan said:
Can this be a case of relation between quantum entanglement and GR ?

No, because GR is not a quantum theory; there is no way in GR to represent a superposition of two different spacetime geometries, which is what the QM side of your thought experiment would require. We would need a quantum theory of gravity to model such an experiment.

(Note that a single particle's effect on the spacetime geometry would be many, many orders of magnitude too small to measure, now or for the foreseeable future; but it is possible to construct thought experiments where some kind of quantum uncertainty could lead to a superposition of possible positions for an object whose effect on spacetime geometry is measurable.)
 
  • Like
Likes PeroK
  • #56
Thanks. I see a difficulty in the experiment myself. The procedure of carrying the half boxes would amount to a measurement of their inertia and collapse the wavefunction before their arrival.
 
  • #57
Buzz Bloom said:
the post in which @Demystifier said "the past, the presence and the future exist on an equal footing." I just searched the entire thread, and I can not find the post in which Demystifier said this. The quote seems to have vanished, perhaps due to some recent editing.
It's in the first post, item 2.
 
  • Like
Likes Buzz Bloom
  • #58
maximus43 said:
vanhees71 said:

What was Bell's opinion of QM?

Barry
I'm not so sure about this. For me the great merit of Bell's idea is that he brought a pretty unsharp philosophical question about "reality", and the also pretty enigmatic ideas proposed in the (in)famous EPR paper (which Einstein himself didn't like too much), to a clear, empirically decidable scientific question: namely whether, starting from a clear mathematical definition of the statistical meaning of such a theory, a local deterministic hidden-variable theory can reproduce all statistical predictions of quantum theory. The important point is that he could derive his famous inequality concerning measurements on ensembles, which holds within this class of local deterministic hidden-variable theories but is violated by the predictions of QT. In this way he found a general theoretical scheme that allows one to decide whether or not a local hidden-variable theory can always be constructed that leads to the same statistical predictions as QT. I'm not sure whether Bell expected QT or the local deterministic hidden-variable theories to hold.
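As an aside (my own illustration, not part of vanhees71's post), the violation can be checked in a few lines of Python: the quantum prediction E(a, b) = -cos(a - b) for singlet correlations, plugged into the CHSH combination, exceeds the bound of 2 that any local deterministic hidden-variable theory must satisfy. The angles below are the standard choice that maximizes the quantum value.

```python
import math

# Quantum-mechanical correlation for a spin-singlet pair measured
# along directions a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local deterministic hidden-variable theories obey |S| <= 2;
# these angles give the maximal quantum value 2*sqrt(2).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # 2.828..., i.e. 2*sqrt(2) > 2
```

This is exactly the kind of "certain set of measurements" Aspect's experiment (discussed below) realized with photon polarizations.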

At this time it was very difficult to realize such experiments, but there were experimentalists who took up the challenge. The first to succeed was Alain Aspect, who prepared entangled photon pairs with an atomic cascade using a laser. That was a breakthrough in the preparation of entangled photon states, and he could successfully demonstrate the violation of the Bell inequality for a certain set of measurements on the polarization states of polarization-entangled photons and thus show that, within the uncertainty of the experiment, QT correctly predicts the correlations between the photon polarizations, contradicting the predictions of any local deterministic hidden-variable theory:

https://en.wikipedia.org/wiki/Aspect's_experiment

Today quantum opticians have much more efficient sources of entangled photons, making use of non-linear optics with strong lasers: there you can produce entangled photon pairs in many kinds of entangled states at high rates, and the corresponding quantum-optics experiments have become very accurate, confirming the violation of Bell's inequalities at very high confidence levels. Many even more exciting experiments have also been done, including "quantum eraser" experiments using postselection schemes à la Scully et al. (e.g., Kim et al.), "quantum teleportation", and "entanglement swapping" (e.g., Zeilinger et al.).

Today this field of "quantum informatics" is entering a phase where you can use it for practical purposes, with applications like quantum cryptography and quantum computing.
 
  • #59
Demystifier said:
It's in the first post, item 2.
Hi @Demystifier:

Thank you very much for for your response. From time to time my memory plays tricks on me. What I remember is that the item I quoted from was about MWI. I have no understanding at all of the "Spacetime interpretation".

Regards,
Buzz
 
  • #60
mitochan said:
Hello. If I remember well Louis de Broglie wrote on his book about experiment of a particle in a box. Say we divide the box half and half and bring them to Tokyo and Paris . We will find a particle in Tokyo half or Paris half when opened. In this situation the particle, as source of spacetime curvature, change geometry in Tokyo or in Paris but it is not decided before opening. Can this be a case mentioning relation between quantum entanglement and GR ?
Maybe you have something like this in mind?
"GR=QM? Well why not? Some of us already accept ER=EPR [1], so why not follow it toits logical conclusion?"
-- Susskind, https://arxiv.org/pdf/1708.03040.pdf

In a more speculative setting, I think there are very interesting possible "interpretations" of spacetime, as well as of the observer-equivalence constraints, that are suggestive of particular research directions for QG and unification.

In my preferred interpretation, one cannot understand the constraints of either SR or GR without also considering how spacetime emerges among interacting "observers". I am closest to the operational interpretation that was mentioned in the first post. The only problem with Einstein's derivation from the two postulates of (observer equivalence) and (invariant upper bound on speed) is that the postulates implicitly contain assumptions about spacetime. My "interpretation" would be to relax postulate one and replace it with observer democracy rather than equivalence. In this case the constraints become emergent, along with spacetime and matter. There are also many indications that upper bounds on speeds follow naturally in information-geometric constructions, so the second postulate is likely not needed either. I admit that in my own work I should probably do better at maintaining a list of references, which is why I refrain from getting too deep. But these suggestions have arisen in several published places by different authors, as well as from my own considerations. A random googling finds for example this, giving you a hint of the general idea; I didn't analyze that paper in depth, but it gets you in the ballpark...

Stochastic Time Evolution, Information Geometry, and the Cramér-Rao Bound
"As a consequence of the Cramér-Rao bound, we find that the rate of change of the average of any observable is bounded from above by its variance times the temporal Fisher information. As a consequence of this bound, we obtain a speed limit on the evolution of stochastic observables: changing the average of an observable requires a minimum amount of time given by the change in the average squared, divided by the fluctuations of the observable times the thermodynamic cost of the transformation."
-- https://arxiv.org/abs/1810.06832

If you consider a truly _intrinsically_ constructible measure of evolution for an agent, then a kind of stochastic (or probabilistic) evolution seems the only thing at hand.

/Fredrik
 
  • Like
Likes mitochan
  • #61
vanhees71 said:
I'm not so sure about this. For me the great merit of Bell's idea is that he brought a pretty unsharp philosophical question about "reality" and the also pretty enigmatic ideas proposed in the (in)famous EPR paper (which Einstein himself didn't like too much) to a clear scientific empirically decidable question, namely whether with a local deterministic hidden-variable theory, starting from a clear mathematical definition of the statististical meaning of such a theory, all statistical predictions of quantum theory...

Just to add that Bell's theorems rule out any hidden-variable model (deterministic or stochastic) that satisfies "local causality" (defined appropriately).
 
  • Like
Likes vanhees71
  • #63
Some off topic posts have been deleted. Thread reopened.
 
  • #64
Now this is an interesting topic, and one that I'd hope to see discussed seriously a little more often.

I'm going to react mostly to the OP, albeit I have skimmed the responses in the thread too. I'm impressed to see the OP already setting many things right - something that doesn't happen too often on this topic :)

Demystifier said:
Physicists often discuss interpretations of quantum mechanics (QM), but they rarely discuss interpretations of relativity. Which is strange, because the interpretations of quantum non-locality are closely related to interpretations of relativity.

Indeed, I find it extremely curious that Einstein never made any comments about how non-locality is very trivial to explain in Minkowski's spacetime interpretation. Obviously getting feedback "from the future" is a completely unproblematic concept if you have already set all of reality as static (a.k.a. the "Transactional Interpretation of Quantum Mechanics").

That's not to say Minkowski's idea is unproblematic - it's just to say that non-locality in the EPR circumstance is not an insurmountable problem.

Perhaps it's a sign that Einstein did not really view Minkowski's perspective as a necessarily ontologically realistic concept, but rather as a useful mental model. Even after applying that model so comprehensively in the formulation of GR. I think this is rather likely; surely he was well aware that an "ontologically real" relativistic simultaneity immediately requires a static universe - and a detachment of consciousness from that static reality.

But it gets more interesting than that. Let me segue to it via this;

Demystifier said:
3. Ether interpretation. This is not really one interpretation but a wide class of different physical theories. One simple version of the ether theory was developed by Lorentz, before Einstein developed his theory of relativity in 1905. According to ether theories, there are absolute space and absolute time, but under certain approximations some physical phenomena obey effective laws of motion that look as if absolute space and time did not exist. The original Lorentz version of ether theory was ruled out by the Michelson-Morley experiment, but some more sophisticated versions of ether theory are still alive.

This last sentence is quite inaccurate, as Lorentz's ether theory was actually created in response to the M&M experiment. His theory is what introduced the Lorentz transformation to us, and it differs from special relativity only in a philosophical sense (a fact that was well known back in the day, but seems to be often lost in modern descriptions of SR). In fact, the theory in Einstein's "On the Electrodynamics of Moving Bodies" was originally called the Lorentz-Einstein theory. And that is why we still call the transformation in it the Lorentz transformation.

In modern descriptions the history of SR is often characterized as "first we had Lorentz ether theory, then along came M&M, and then Einstein explained it with SR". That is quite a caricature of the actual history.

Now think about this - the historical expectation that the M&M experiment should have revealed a universal reference frame for EM propagation completely hinges on the assumption that space and matter have completely decoupled existences from one another (e.g. that the EM propagation that binds objects together does not depend on the one-way speed of c).

Lorentz (and FitzGerald before him) started to hypothesize on the possibility that macroscopic objects - as manifestations of electromagnetism themselves - might depend on the one-way speed of c. That analysis yields the Lorentz transformation as a valid transformation between reference frames. This was a decade before special relativity, and it is 100% the same math as used in SR.
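To make the "100% the same math" point concrete, here is a minimal sketch (my own illustration; units with c = 1 and the sample numbers are arbitrary) of the boost both theories share, checking that it preserves the spacetime interval - which is all either theory needs empirically:

```python
import math

def lorentz_boost(t, x, v, c=1.0):
    """Lorentz boost along x with relative velocity v -- the identical
    transformation in Lorentz's ether theory and in SR."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# The observable content is frame-independent: the interval
# c^2 t^2 - x^2 is preserved by the boost.
t, x, v = 3.0, 1.0, 0.6
tp, xp = lorentz_boost(t, x, v)
print(t**2 - x**2, tp**2 - xp**2)  # 8.0 8.0
```

The two theories disagree only on whether one of these frames is ontologically privileged, which is exactly the philosophical (not empirical) difference discussed above.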

Obviously the idea that electromagnetic objects would be dependent on the propagation of electromagnetic fields is not such an "ad hoc" idea in itself - indeed it is exactly the idea behind one of our most successful modern theories, quantum field theory. I find the idea of a decoupled existence of matter and space much harder to reconcile into a self-consistent model.

Now, the order of historical events is somewhat relevant here;

Demystifier said:
1. Operational interpretation. According to this interpretation, relativity is basically about how the appearance of space, time and some related physical quantities depends on motion (and current position) of the observer. Essentially this is how Einstein originally interpreted relativity in 1905.

Einstein's original paper is indeed quite neutral in a philosophical sense. It employs the Lorentz transformation between reference frames exactly like Lorentz's own version, but the arguments for getting there are different. The first part of the paper revolves around the well-known fact that the one-way speed of light is fundamentally impossible to measure, since you have no means to synchronize your two clocks (this is always trivially true for the fastest information speed available to you - also a commonly known issue in the physics community at the time, but nowadays it seems to be somewhat lost in bad education).

Einstein argues that since it is fundamentally impossible to synchronize two spatially separated clocks, it must always be logically and observationally valid to do all your calculations under the assumption that the speed of light is exactly c in any given reference frame you want - you will never be able to find yourself being wrong - so long as you apply the Lorentz transformation between the frames. (And why this is so should - logically speaking - be trivially understandable to anyone who understands the Lorentz transformation.)

In Lorentz's theory the situation is the same - as an electromagnetic creature that is part of an electromagnetic universe, you are fundamentally unable to measure whether or not you are using the correct universal reference frame. The logical conclusions in the observational limit are all identical to SR.

The philosophical discussion between these two flavors of the same mathematical transformation really boils down to the question: does relativistic simultaneity represent merely our observational limit, or the natural structure of reality?

It is true that the SR version was philosophically more neutral - it did not claim to know "why" relativistic simultaneity is valid in an ontological sense, only in a purely logical sense.

But then this happened;

Demystifier said:
2. Spacetime interpretation. According to this interpretation, relativity is not so much about the appearance of space and time to observers, as it is about the 4-dimensional spacetime that does not depend on the observer. This interpretation was first proposed by Minkowski. Einstein didn't like it in the beginning, but later he embraced it in his formulation of general theory of relativity. The spacetime interpretation naturally leads to the block-universe interpretation of the world, according to which time does not flow, meaning that the past, the presence and the future exist on an equal footing.

In this interpretation Minkowski simply draws out the fact that, if you take relativistic simultaneity as a real feature of reality, it instantly leads to a completely static universe (reality around you cannot have an instantaneous state, since observers coinciding with your location but not with your inertial frame would disagree about what that state is - none of the states would be real prior to observation).

Indeed Einstein expressed dissatisfaction with this idea, but he also came dangerously close to this conclusion himself the very moment he argued that there's no rational reason to posit that unobservable things such as a universal reference frame for c exist. It is very difficult to reconcile his version with realism without ending up with exactly Minkowski's idea of a static reality. (And yet in modern descriptions of relativity, exactly that idea is thrown around quite willy-nilly.)

The philosophical problem with Einstein's argument is that it was not actually neutral either - arguing that observational limits are also limits of existence commits one to a specific structure of reality, which makes it in itself a philosophical assumption. As he found out when he was arguing the exact opposite perspective in the context of quantum mechanics ("surely the Moon must be there when you are not observing it").

This same philosophical stance is taken in many areas of modern physics - for example when arguing that the Planck limit is not just an observational limit but also a limit of the "existence of things". And it leads to similar complications (which I could also discuss at length).

Now it would also be interesting to discuss the fact that the Big Bang theory effectively suggests a universal reference frame (via the supposed simultaneity of emission of the cosmic microwave background radiation), and as such can be seen as establishing universal simultaneity (i.e. the frame of Lorentz's ether) if you wish to apply the "unobservable things do not exist" adage. But since the thread is about the Bell theorem and non-locality, let me cut to the chase and point out something rather interesting right there.

What the Bell theorem means is that no local realist hidden-variable theory can make the same predictions as quantum mechanics. Meaning, it only applies to theories where some hidden variables determine the state of a real object, where that real object exists prior to observation. Meaning, it does not imply local realism is dead, as it only applies to the class of theories which assume that the wave-behavior-exhibiting particle objects (such as photons) actually do exist, even though we are only observing detection interactions!

It is very interesting to me that Einstein (and, basically, everyone) always missed the very real possibility that the objects we detect and call "particles" are merely quantized detection events, manifested by wave energies. This idea would have landed squarely on Einstein's "only observable things are real" philosophy. Perhaps his reluctance to consider this possibility had something to do with the fact that he played a very integral role in the conception of "photons" in the first place. At the end of the day, no one has ever seen a photon; we have only seen detection events that suggest a model in which they exist. Detection events that could also be explained by positing a quantized mechanism for the interaction itself (as opposed to positing the existence of quantized carriers).

In my mind it's very simple: the Bell theorem is an explanation of why in our models "particles" cannot actually be placed where we see wave-like behavior, unless we are also ready to throw out either locality or realism.

So how about, instead of throwing away realism or locality, we throw away the idea of particles? In that case a local realist explanation of the Bell experiment actually becomes quite trivial. Place an observational limit (instead of an "existence limit") on quantized EM detection events (you can't observe it unless it manifests an interaction event), and what you get is fully wave-like propagation of EM energy from emission to the two detection sites. Modification of the wave-like energy through polarization filters (or any mechanism that does not cause a "collapse" - i.e. yield an actual detection event) would yield a cosine correlation in the "probabilities of quantized detection interactions occurring". Not a great surprise - the wave propagation is best described by Schrödinger's equation - so if we manage to keep the propagation as waves, from emission to detection, we expect to always get a result that is fully aligned with QM expectations, while maintaining fully ordinary local realist mechanisms.

The critical difference is - you don't have the idea of a free-flight particle with definitive properties of its own - the properties we observe are only determined by the actual interaction event, which is itself quantized (so its occurrence is probabilistic, depending on the underlying wave reality - and it may not occur at all even when the underlying wave energy does exist). In this case, any modifications to the detection probabilities at the two sites will yield a cosine correlation (a completely ordinary wave feature), and this would not be possible if there were particles in free flight (without giving up realism or locality).

If the above hand-wavy description doesn't explain the crux of it, I wrote a more complete description of this same fact starting from page 8 here.
(Assuming you are familiar with the wave description of polarization filters. If not, a short description of those appear earlier in that same article)

So, what's the point of all this? The point is, do not simply assume that "if it can't be measured, it does not exist". Contrary to popular belief, that philosophy does not automatically yield an ideally elegant "Occam's Razor" philosophy. It can be effective if applied right, but sometimes it just makes your models more convoluted down the line.

-Anssi
 
  • Skeptical
  • Like
Likes Lynch101, gentzen, weirdoguy and 1 other person
  • #65
AnssiH said:
I find it extremely curious that Einstein never made any comments about how non-locality is very trivial to explain in Minkowski's space-time interpretation.
Nonlocality by itself can be accommodated by an interpretation like the Transactional Interpretation, as you say, yes.

What cannot be accommodated by any interpretation involving classical spacetime is superposition. For example, suppose we set up a "Schrodinger's cat" type experiment where, instead of a random quantum event like a radioactive decay determining whether a cat is alive or dead, have it determine whether or not a significant change in the distribution of matter occurs--for example, whether a ball with enough mass to register in a Cavendish-type experiment goes to the left or to the right. No classical spacetime model can describe this experiment, because it involves a superposition of different spacetime geometries (more precisely, it involves the entanglement of the spacetime geometry with other degrees of freedom). In a classical spacetime model, there is only one spacetime geometry. The geometry can be determined dynamically by the distribution of matter, but there is no way to model a superposition of different matter distributions being entangled with the spacetime geometry and causing a superposition of different spacetime geometries.
 
  • #67
PeterDonis said:
Nonlocality by itself can be accommodated by an interpretation like the Transactional Interpretation, as you say, yes.

What cannot be accommodated by any interpretation involving classical spacetime is superposition. For example, suppose we set up a "Schrodinger's cat" type experiment where, instead of a random quantum event like a radioactive decay determining whether a cat is alive or dead, have it determine whether or not a significant change in the distribution of matter occurs--for example, whether a ball with enough mass to register in a Cavendish-type experiment goes to the left or to the right. No classical spacetime model can describe this experiment, because it involves a superposition of different spacetime geometries (more precisely, it involves the entanglement of the spacetime geometry with other degrees of freedom). In a classical spacetime model, there is only one spacetime geometry. The geometry can be determined dynamically by the distribution of matter, but there is no way to model a superposition of different matter distributions being entangled with the spacetime geometry and causing a superposition of different spacetime geometries.

Hi Peter :)

Actually, in the Transactional Interpretation there's no need to accommodate the concept of superposition. It is a mysterious concept only in Copenhagen (for reasons I outline in my post). In TI there are also probabilistic components to our expectations, but it wouldn't mean those objects are actually in superposition - it would just mean there are probabilistic outcomes to our expectations.

The only reason why superposition is not viewed as a component of observer ignorance is Bell experiments, and as soon as you have a mechanism to explain them, you have no superposition anymore.

Cheers,
-Anssi
 
  • Skeptical
  • Sad
Likes gentzen and PeroK
  • #68
PeterDonis said:
This is not a valid reference for PF discussion. Are there any published papers that describe this model?

If you read it, you will see it's not even a model, but merely a discussion of an interpretation of QM.

And since it's written by me, it should be in accordance with the guidelines for me to discuss it, as far as I can tell.

If you feel otherwise, I can remove the link and replace it with the same text that is found behind the link (really, not that different from what I'm discussing in the post, just explaining the same issues in more detail).

Not trying to be facetious, but this area of the forum is decidedly about QM interpretation, and the text behind the link is actually discussing the direct consequences of completely established models of modern physics (such as the standard view of refraction in transparent materials, or the behavior of polarization filters).

Surely it must be within the guidelines to discuss the impact of completely established theories on possible interpretations of QM. (I mean, if it isn't, then why are we here :smile:)

-Anssi
 
  • #69
AnssiH said:
in Transactional Interpretation there's no need to accommodate for the concept of superposition.
There is if you want to try to apply it to QM, as you are doing here.

AnssiH said:
The only reason why superposition is not viewed as a component of observer ignorance is Bell experiments, and as soon as you have a mechanism to explain them, you have no superposition anymore.
Superposition is part of the basics of QM. You can't just wave your hands and say it goes away.
 
  • #70
AnssiH said:
If you read it, you will see it's not even model, but merely a discussion of an interpretation of QM.
Your post #64 goes well beyond "discussion of an interpretation of QM". It is your personal research unless you can give a reference to an already published, peer-reviewed paper that supports the claims you are making.

AnssiH said:
And since it's written by me, it should be in accordance to the guidelines for me to discuss it, in so far as I can tell.
PF rules prohibit discussion of personal research. It's personal research unless and until you get it published in a peer-reviewed journal.

AnssiH said:
If you feel otherwise, I can remove the link and replace it with the same text that is found behind the link
That wouldn't change any of the above.

AnssiH said:
the text behind the link is actually discussing the direct consequences of completely established models of modern physics (such as the standard view of refraction in transparent materials, or the behavior of polarization filters).
Then you need to give references to the "completely established models" that make the claims you are making.
 
  • #71
PeterDonis said:
There is if you want to try to apply it to QM, as you are doing here.

Superposition is part of the basics of QM. You can't just wave your hands and say it goes away.
Hi Peter

It's always a bit difficult to gauge on online forums what the level of understanding of the other parties is on a particular topic, so I don't know how detailed my explanation of something needs to be. My apologies if a couple of shoddy sentences are not enough to point out the relevant bits :P But actually you seem to have quite a distorted view of the role of superposition in quantum mechanics.

Historically, the idea that superposition represents some kind of real state of a real object has its roots in the EPR paradox and in the Bell experiment. See, the argument the EPR paradox put forward was exactly the idea that "superposition must be just observer ignorance - otherwise we lose local realism". A lot of people saw that as a pretty good argument. But what Bell's theorem points out is that in all theories where those particles-to-be-observed exist prior to observation, we will lose local realism.

It is now viewed as almost synonymous with QM because almost all common interpretations operate with the concept of superposition. But that doesn't mean all interpretations do.

In fact it is precisely the entire point of the Transactional Interpretation that locality can be preserved by advanced and retarded waves - there is no "superposition" because the emission site can have information about the "upcoming detection event" before it happens.

Meaning, it doesn't "explain superposition" because it's not an interpretation where superposition even occurs.

I personally don't think it's a very interesting interpretation, but it is certainly valid.

For references to published material for that one, there's a whole list in here:
https://en.wikipedia.org/wiki/Transactional_interpretation

Have fun,
-Anssi
 
  • Skeptical
Likes weirdoguy and EPR
  • #72
PeterDonis said:
Your post #64 goes well beyond "discussion of an interpretation of QM". It is your personal research unless you can give a reference to an already published, peer-reviewed paper that supports the claims you are making.

PF rules prohibit discussion of personal research. It's personal research unless and until you get it published in a peer-reviewed journal.

That wouldn't change any of the above.

Then you need to give references to the "completely established models" that make the claims you are making.
Well admittedly there can be a fuzzy line between "here are my thoughts" and "personal research" (I mean the posts are literally just discussions I've written about a topic).

I'm a little befuddled about where that line ought to be drawn - it is a discussion forum after all. But I'm confident none of my discussions represent any kind of distortion of well-established physical models. I'm also confident that I'm clearly expressing what represents opinions and what represents facts. I'm confident because this whole discussion is about identifying what counts as opinion in the sociological history of physics.

As for references, if you check out the link, practically every section contains a link to a simple YouTube video discussing a bog-standard facet of modern physics, on standard physics channels like Sixty Symbols. I'd hope that's enough to convince you there's nothing non-standard going on in there. But if not, I can spend some time tomorrow looking for a standard paper about the modern understanding of things like transparency and refraction.

Also, please let me know if anything I'm saying sounds somehow suspect to you (or anyone else). I know some bits might sound superficially odd, but that's almost always because some factoids are lost in the modern representations of physics (the relevant historical references are actually quite easy to find with about one Google search).

Regards
-Anssi
 
  • Skeptical
Likes weirdoguy
  • #73
AnssiH said:
the idea that superposition represents some kind of a real state of a real object
Is not at all what I'm saying. I'm just saying that superposition--and also entanglement, which is really more the issue, as my previous post made clear--is part of standard QM (various interpretations put different meanings on it, but they all have it), so you can't claim that you are somehow making superposition just go away and also claim that you are discussing an established model. You can't have it both ways.

AnssiH said:
it doesn't mean all interpretations do.
Please give a reference to an established interpretation of QM that does not have superposition.

AnssiH said:
Transactional Interpretation
Please give a reference on the TI that makes the same claims you are making. I don't think one exists; I think what you are calling the "Transactional Interpretation" is not the established TI itself, but your own personal version of it. That's why I keep asking you for references.

AnssiH said:
I'm confident none of my discussions represent any kind of distortion of very established physical models.
I'm not. That's why I keep asking you for references. If you point me at a reference (a textbook or peer-reviewed paper) and say "this is the established physical model I'm talking about", then I (and other readers of this thread) can evaluate for myself whether I agree that you're describing it correctly and that it makes the same claims you are making. Without that, all we have is what you claim, and that's not enough.
 
  • #74
Demystifier said:
The past, presence and future exist on an equal footing.
Exist at once ?
 
  • #75
physika said:
Exist at once ?
Define "at once"!
 
  • Like
Likes PeroK
  • #76
physika said:
Exist at once ?
If you consider 4D spacetime, there is no longer the concept of that 4D spacetime evolving - as there is no alternative time parameter outside that spacetime with which to parameterise its evolution. You simply have a single 4D manifold with no concept of evolution.

The mistake, IMO, is to think that this has some physical significance in terms of causality or "simultaneous" existence of past and future. For any object in that spacetime things evolve according to its measurement of time, and that object has clearly distinguishable past, present and future.
 
  • Like
Likes vanhees71
  • #77
AnssiH said:
As of references(...) it contains a link to a simple youtube video discussing a bog standard facet of modern physics, on standard physics channels like Sixty Symbols.

YouTube videos are not appropriate references. Only peer-reviewed papers and textbooks are.
 
  • Like
Likes AnssiH
  • #78
This is also clear in the geometric picture given in the first paragraph. For simplicity let's consider special relativity within an arbitrary global inertial reference frame. Restricting yourself to only one spatial dimension, you can depict it as a usual Minkowski diagram in the plane, but you must forget the Euclidean ideas about this plane and substitute the geometry imposed by Minkowski space, where the fundamental bilinear form in our (1+1)-dimensional "world" is ##x \cdot y=x^0 y^0-x^1 y^1##. A proper orthochronous Lorentz transformation then transforms to another inertial frame, keeping the Minkowski product invariant, i.e., with ##x' =\hat{\Lambda} x## and ##y'=\hat{\Lambda} y## you have ##x \cdot y =x' \cdot y'##. This implies that instead of circles in the Euclidean plane you have the hyperbolae ##x \cdot x=C=\text{const}##, where now ##C \in \mathbb{R}##. For ##C>0## you get time-like hyperbolae, for ##C<0## space-like hyperbolae, and for ##C=0## the hyperbola degenerates to the lightcone ##x^0=\pm x^1##.

The points in the plane can be identified with "events", i.e., something happening at a time ##t## (##x^0=c t##) at a position ##x^1## (e.g., a short flash of light). One can show that the time order of two events ##x_A## and ##x_B## is invariant under Lorentz transformations if and only if the two events are timelike or lightlike separated, i.e., ##(x_A-x_B) \cdot (x_A-x_B) \geq 0##.

Now, if two events are causally connected, i.e., the event ##x_A## is the cause of the event ##x_B## (e.g., if ##x_A## is a flash of light and ##x_B## is its detection by an observer), you must have ##x_A^0<x_B^0## in your current inertial frame of reference and also in any other inertial frame of reference, i.e., ##(x_B-x_A)## must be timelike or lightlike (for our flash example it is of course lightlike). The Lorentz transformation between two inertial frames must be orthochronous, i.e., ##{\Lambda^0}_0 \geq 1##, in order that the time ordering of causally connected events (i.e., events that are timelike or lightlike separated) is invariant under the Lorentz transformation. This fixes the physical symmetry group to the proper orthochronous Lorentz group (i.e., the subgroup of all Lorentz transformations that is smoothly connected to the identity). For our (1+1)-dimensional world these are the transformations given by the matrices
$$\hat{L}=\begin{pmatrix} \cosh \eta & -\sinh \eta \\ -\sinh \eta & \cosh \eta \end{pmatrix},$$
i.e., the proper boosts with ##\mathrm{det} \hat{L}=1## and ##{L^0}_0 \geq 1##. The boost velocity is given in terms of the "rapidity" ##\eta## by ##\beta=v/c=\tanh \eta##.

The specific mathematical choice of the Lorentz manifold (an affine pseudo-Euclidean manifold with a fundamental form of signature ##(1,3)## or, equivalently ##(3,1)##) is indeed such that you have 1 time-like and 3 space-like coordinates enabling one to establish the "causal structure" in complete analogy to the above explained example of the somewhat simpler (1+1)-dimensional case.
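As a quick numerical sanity check of the algebra above (a minimal sketch of my own, not part of the quoted post), one can verify that the boost matrix ##\hat{L}(\eta)## preserves the (1+1)-dimensional Minkowski product and satisfies the proper orthochronous conditions:

```python
import numpy as np

def boost(eta):
    """Proper orthochronous Lorentz boost in (1+1)D with rapidity eta."""
    return np.array([[np.cosh(eta), -np.sinh(eta)],
                     [-np.sinh(eta), np.cosh(eta)]])

def minkowski(x, y):
    """Fundamental bilinear form x.y = x^0 y^0 - x^1 y^1."""
    return x[0] * y[0] - x[1] * y[1]

eta = 0.7
L = boost(eta)
x = np.array([2.0, 1.0])
y = np.array([0.5, -1.5])

# Invariance of the Minkowski product: x'.y' = x.y
assert np.isclose(minkowski(L @ x, L @ y), minkowski(x, y))

# Proper (det L = 1) and orthochronous (L^0_0 = cosh(eta) >= 1)
assert np.isclose(np.linalg.det(L), 1.0)
assert L[0, 0] >= 1.0

# Boost velocity in units of c: beta = tanh(eta)
beta = np.tanh(eta)
```

The same check works for any rapidity, since ##\cosh^2\eta - \sinh^2\eta = 1## is what makes the hyperbolae ##x \cdot x = C## invariant curves.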
 
  • #79
PeterDonis said:
Please give a reference to an established interpretation of QM that does not have superposition.Please give a reference on the TI that makes the same claims you are making. I don't think one exists; I think what you are calling the "Transactional Interpretation" is not the established TI itself, but your own personal version of it. That's why I keep asking you for references.I'm not. That's why I keep asking you for references. If you point me at a reference (a textbook or peer-reviewed paper) and say "this is the established physical model I'm talking about", then I (and other readers of this thread) can evaluate for myself whether I agree that you're describing it correctly and that it makes the same claims you are making. Without that, all we have is what you claim, and that's not enough.
Hi Peter

I understand - and appreciate - that you are here trying to maintain high quality of content. But you are going to have to follow the references I give before you complain I'm not giving you anything.

I'll break it down as much as I can, but I can't do all the reading and thinking for you.

I pointed you to the TI Wikipedia article because it gives you a first point of contact for judging its merit, in case you are not at all familiar with it.

To dig deeper, the very first reference on that page is Cramer's paper itself, free to download on ResearchGate: https://www.researchgate.net/publication/226312851_Transactional_Interpretation_of_Quantum_Mechanics

Before you even download the .pdf, the images in the abstract are spacetime diagrams that explain the crux of the concept.

[attached image: spacetime diagrams from the paper's abstract]


Which is that all interactions are local in a strong sense of "locality" - it's just that the absorber interacts with the emitter via an "advanced wave", i.e., a wave propagating backwards in time along the lightcone surface.

So if you understand the topic of superposition, at this point it should already be clear there is no such thing as "superposition" occurring in the paradigm of this interpretation. "Entanglement" is just what observers call a circumstance where a transaction yielded a seemingly non-local correlation between detection sites. It's really not a very complex concept if you think of it from a "static spacetime" perspective (which is why it's interesting that Einstein never made any comments about this possibility).

That paper is quite long and involved, especially if you are not too familiar with the topic, so I won't blame you if you don't investigate it too carefully. But if you do, it offers a lot of interesting commentary on the history of quantum interpretations. Unfortunately the PDF is captured as images (as opposed to text), but let me just point out a few interesting details for your benefit;

[attached image: excerpt from Cramer's paper]

What he is getting at here is that there's non-locality of a "strong" kind (which would allow establishing simultaneity and thus conflict with the basic tenets of SR) and of a weak kind (which demonstrates seemingly impossible correlations -> Bell experiments).

He chooses to call the second kind "non-locality" as well. Personally I find it simpler to view locality as having all interactions pass via light-like connections, and thus I'd call TI "local" too. But that's just semantics - the main thing is to understand the topic being discussed, not to agree on what words to use.

[attached image: excerpt from Cramer's paper]


This is referring to the exact topic of my post from yesterday - about the philosophical pitfall of considering unobservable things as not existing - even when we have a fully logical reason for that observational limit to exist (in the context of SR, it is the stance that simultaneity of events does not exist).

[attached image: excerpt from Cramer's paper]


(Here SV is "State Vector")
So this is a direct reference to Einstein complaining about the loss of the "strong" sense of locality, and he is preparing to point out the difference from the Bell type of non-locality, where no actual superluminal communication can be accomplished but we are merely talking about unexpected correlations between detection events;

[attached image: excerpt from Cramer's paper]


Then he establishes the simple fact that if you allow a handshake to occur within a static time block (or, if you wish, a transaction occurring forward and backward in time), we can explain the "enforcement of correlations in spatially separated measurements" (hopefully it's clear, but if not - he means Bell experiments);

[attached image: excerpt from Cramer's paper]


And here's a simple schematic of the concept again:

[attached image: schematic of the transaction concept]

There's no superposition or entanglement here - there is only the idea of a completely static spacetime, where some patterns "look like" enforced correlations, but superposition does not actually occur.

PeterDonis said:
Is not at all what I'm saying. I'm just saying that superposition--and also entanglement, which is really more the issue, as my previous post made clear--is part of standard QM (various interpretations put different meanings on it, but they all have it), so you can't claim that you are somehow making superposition just go away and also claim that you are discussing an established model. You can't have it both ways.
I'm afraid multiple topics are getting a bit mixed up here. First of all, entanglement is a concept that was pushed into the limelight by the EPR paradox and Bell's theorem. References to that claim here. It is true that entangled properties are problematic already in the very original QM formalism - EPR merely brought the problem into clearer attention. The Transactional Interpretation - the bog-standard version - is simply an example of an interpretation with no instantaneous effects across spacelike-separated events - other than as an illusion to observers who observe a forward flow of time. The point here is that TI falls out directly from Minkowski's interpretation of relativity, where the universe is seen as a static structure (nothing prevents causal patterns running only forward along light-like planes).

This is why I said I'm surprised Einstein never commented on this simple possibility. I'm almost certain he would have understood it exists as a logical possibility, but perhaps was not too enamored of pushing it as a realistic interpretation of reality.

I'm afraid you may also be confusing TI with some of my other comments about the fact that non-locality appears as a direct consequence of insisting that objects like photons exist in free flight. You see, TI is not the only way to preserve locality and realism; it's just one example. Now, I'm sure numerous comments alluding to that fact have been made in all sorts of published papers and articles, but I would hope it's quite easy for anyone to think through themselves if they merely understand the basic history of quantum mechanics. After all, it strikes at the very heart of wave-particle duality. But let me spell it out too (some thinking will be required);

The EPR paradox (reference already provided) was formed as an argument against the idea that particles (such as photons) don't have a definitive state of their own prior to observation. The many subtle flavors surrounding this idea are extensively discussed in the TI paper I referred to above. What EPR tries to establish is that there still are particles in reality, and they must have a definitive state prior to observation (an exact 180-degree shift from Einstein's original stance about unobservable things).

Bell's theorem is a response to the EPR argument, and the fundamental issue it points out is that it is not possible to preserve local realism in any view where those particles exist prior to observation. You see, the state of the system is already completely described by a wave function, and the correlations we get would not be problematic to locality if we didn't have any particles in free flight (but instead only had quantized detection events of wave energies). This assertion is nothing but bog-standard physics.

First of all, detection events of EM energy are best described as quantized. This is normally described as an electron only being able to absorb energy in discrete steps (which is often viewed as an indication of the electron being a harmonic wave itself). The history of this view starts with Max Planck solving the "ultraviolet catastrophe" by assuming quantized energy absorption. This is in fact a crucial moment in the history of quantum mechanics - not something I just made up.

Also, the modern view of transparent materials is exactly that the configuration of the electrons in the material has a wide gap in the EM spectrum right at the visible-light range, which means EM radiation cannot get absorbed (since absorption is quantized). At the same time, optical refraction of said light is viewed as being caused by the light passing through the material as a wave - interfering with itself and yielding a bent path as the total sum of all the interference. This is also bog-standard physics - not my idea.
References here.

So, if you've never really thought through the actual logic behind Bell's theorem (and the experiments thereof), it might seem surprising to find this out. But if you have thought it through, you might be aware of something curious. Heck, the TI paper even points this simple fact out;

[attached image: excerpt from Cramer's paper]

(p. 654)
The point there is that in the normal wave picture, the "energy loss" (which in the QM formalism represents the probability of observation) is a cosine correlation with the filtering angle. This is bog-standard wave mechanics, not something I made up. If you have multiple filters in a chain, the energy loss at the next one is a function of the angle relative to the previous one. The coincidence rate is expected (and observed) to be the same whether you push energy through chained filters or place the filters far apart and measure entangled wave-fronts or photons.

In the case of photons, there's a well-known problem that adding a diagonal filter in between two completely orthogonal filters will increase the rate of passing "photons" from 0% to 25%. This is an example of wave-particle duality - a case where the particle view falls apart. Meanwhile, a wave view yields exactly the expectations we get (to figure out the expectations you just need basic trigonometry - a cosine correlation).
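To make the trigonometry concrete, here's a minimal sketch (my own illustration, not from Cramer's paper) applying Malus's law, a factor of cos²(Δθ) per filter step, to a chain of polarizers:

```python
import math

def chain_transmission(angles_deg):
    """Probability that light which passed the first filter also passes
    the rest of the chain, applying Malus's law cos^2(delta) per step."""
    p = 1.0
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        p *= math.cos(math.radians(cur - prev)) ** 2
    return p

# Two crossed (orthogonal) filters: nothing gets through.
print(chain_transmission([0, 90]))      # ~0.0

# Inserting a diagonal filter between them raises the rate to 25%.
print(chain_transmission([0, 45, 90]))  # ~0.25
```

The 0% -> 25% jump is just cos²(45°) · cos²(45°) = 0.5 · 0.5, which is the case described above.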

So if you simply consider all of these ideas together - all already assumed valid by modern physics - you should be able to trace down the expectations of a standard Bell experiment. A full wave picture of that experiment yields exactly the same correlation expectation via chained filters as via spatially separated filters. The probability of absorption at a detection site depends on the filter configuration via a simple cosine correlation, as can be traced with simple trigonometry. The QM formalism's expectation is purely a cosine correlation - not that surprising, given that quantum mechanics is a wave formalism.

See;
https://en.wikipedia.org/wiki/Bell's_theorem#/media/File:Bell.svg

The linear correlations in this picture refer specifically to the idea that particles have a definitive state in flight. The non-linear correlation is the QM expectation, where only wave propagation is modeled.
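For readers who want to see the difference numerically, here's a small sketch (my own illustration, not from the post) comparing the QM cosine correlation for polarization-entangled photons with a straight-line correlation of the kind shown in that figure:

```python
import math

def qm_corr(delta_deg):
    """QM correlation for polarization-entangled photons: cos(2*delta)."""
    return math.cos(2 * math.radians(delta_deg))

def linear_corr(delta_deg):
    """Straight-line correlation: falls from +1 at 0 deg to -1 at 90 deg."""
    return 1.0 - delta_deg / 45.0

# The two curves agree at 0, 45 and 90 degrees...
for d in (0, 45, 90):
    assert math.isclose(qm_corr(d), linear_corr(d), abs_tol=1e-12)

# ...but in between, QM predicts a stronger correlation, e.g. at 22.5 deg
# qm_corr(22.5) is about 0.707 while linear_corr(22.5) is 0.5
assert qm_corr(22.5) > linear_corr(22.5)
```

It is exactly this gap between the curves at intermediate angles that the Bell-test experiments measure.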

That simply means that in a picture with no particles, your expectation is the wave expectation. Nothing more, nothing less. Not my idea - just pointing out features of Bell's theorem. So you see, the only case where that picture of "enforced correlations" implies non-locality is the case where the detection events - which are presumed to be quantized in themselves - are presumed to have been caused by particles instead of waves. The ideas of superposition and "spooky action at a distance" require the assumption of particles in free flight - a pure wave picture does not yield those concepts.

This is the entire point of so-called wave-particle duality; the concept exists because the particle description doesn't cut it in some situations and the wave picture in others. Various interpretations of QM really are fundamentally just proposals about where the limit between these behaviors lies. The idea that photons might not be a good concept is not very controversial either - many physicists make comments about how neither picture is really suitable. It's also a good idea to remind oneself of the sociological history of the concept of photons. Reference to a published article here.

So to get back to my point about this - it is very peculiar that everyone in the history of QM seemed to insist that some things we cannot directly observe exist (even when we can't see behavior that would suggest they do), but at other times made a whole lot of fuss about unobservable things not existing. It is also interesting that everyone seems to think wave-particle duality is an easier concept to reconcile with realism than the idea that there are no free-flight quantized carrier entities. I wouldn't consider that a foregone conclusion myself (<- but that's just an opinion right there)

What I would have asked everyone back in the day is: what happens to the excess energy of a "photon" when an "electron" cannot absorb it all? 🤔 (That's a reference to energy conservation - not my idea)

Sorry about the length, but you keep asking for references to pretty standard things, so I can't know what parts of physics you already understand and what parts need to be spelled out... But it's always fun to discuss this, so thanks!

Regards
-Anssi
 
  • Like
  • Skeptical
Likes Lynch101, gentzen and weirdoguy
  • #80
AnssiH said:
you are going to have to follow the references I give
You haven't given any valid ones. Wikipedia is not a valid reference. Videos are not a valid reference. You need to be giving references to textbooks and peer-reviewed papers. Either do that or you will receive a warning and be banned from further posting in this thread.

(Note: You do give a reference to a Cramer paper a little further in your post. I'll take a look at it, but I'm already familiar with the basics of the TI from reading other things Cramer has written, so I doubt that paper will tell me anything I don't already know.)

AnssiH said:
you keep asking for references to pretty standard things
No, I'm not. I'm asking for references that support the particular claims you are making. I have already explained in previous posts why the particular claims you are making are not just "standard things".

AnssiH said:
I'll break it down as much as I can
You don't need to "break it down" any further. You need to give valid references that "break it down" the way you are breaking it down.

AnssiH said:
I can't do all the reading and thinking for you.
This comment almost got you a warning and thread ban all on its own. This attitude is not productive. Continuing to act like you are just expounding "standard" physics in the face of repeated posts explaining why you aren't is not productive discussion.
 
Last edited:
  • Like
Likes Lynch101, vanhees71 and weirdoguy
  • #81
Demystifier said:
Define "at once"!

"Past" "Present" "Future"
all together, all the events side by side.
All, at once.
 
  • #82
physika said:
"Past" "Present" "Future"
all together, all the events side by side.
All, at once.
What you really mean is "all at the same time" - falling headlong into the trap of imagining a second time parameter outside spacetime.
 
  • #83
PeroK said:
What you really mean is "all at the same time" - falling headlong into the trap of imagining a second time parameter outside spacetime.

You can't say "all at the same time" because there is no time.
 
  • #84
physika said:
You can't say "all at the same time" because there is no time.
Precisely! And "all at once" means, literally, "all at the same time". So, you can't say that either.
 
  • #85
PeroK said:
Precisely! And "all at once" means, literally, "all at the same time". So, you can't say that either.
Well, "at once" is more abstract, but
it could be said then, "All, side by side"
 
  • #86
PeroK said:
If you consider 4D spacetime, there is no longer the concept of that 4D spacetime evolving - as there is no alternative time parameter outside that spacetime with which to parameterise its evolution. You simply have a single 4D manifold with no concept of evolution.

The mistake, IMO, is to think that this has some physical significance in terms of causality or "simultaneous" existence of past and future.
Even if the illusion of an experienced flow of parameter time in that picture is consistent with the "timeless picture" as a whole, the big problem comes when you demand that this view follow from an inference. To infer the timeless laws that are required to make the timeless block-universe picture manifest, one needs many repeats of the observations, and the timescale of the experiments must be short enough to allow the inference in a timely manner. This abstraction works perfectly for small subsystems, which is why it's no surprise that there is no arrow of time or cosmological time in subatomic physics. The laws are just inferred in a way such that this follows by construction! It's what you get when you repeat many "short" interactions and abduce the remaining regularities from them; these will by construction be timeless, because time is exactly what is averaged away in the inference.

But that same construction does not make much sense on cosmological scales. This is why it is not easy to get rid of time on the cosmological scale. The application in absurdum of that same paradigm to cosmological perspectives (without corroboration) is what Smolin coined the "cosmological fallacy". And from the perspective of inference I think it's very clear that it is a genuinely fallacious extrapolation. See https://arxiv.org/pdf/1201.2632.pdf

So the argument would be that for small subsystems you can reduce time to a parameter, but I think it's fair to say the concept is suspicious (at minimum) when applied to the whole universe.

/Fredrik
 
  • #87
physika said:
"Past" "Present" "Future"
all together, all the events side by side.
All, at once.
Then yes, at once.
 
  • Like
Likes physika
  • #88
Demystifier said:
The past, presence and future exist on an equal footing.
physika said:
Exist at once ?
Demystifier said:
Define "at once"!

Define past, present and future!
 
  • #89
Notions along lines such as these interest me.
 
  • #90
Demystifier said:
The past, presence and future exist on an equal footing.
The central thing that makes sense of this equal footing is the existence of a timeless law. This is what reduces everything to the choice of initial conditions. The timeless law means the whole history is determined.

For some this is fine, especially if you are ontologically inclined, and care less about getting an inferential handle on this.

The above block universe implies that nothing happens. The laws are fixed and static, and all there is to explain is the choice of initial conditions, which tends to get us into an incredible fine-tuning trap. Also, the choice of the observer is here indeed a complete redundancy. This is of course a fatal flaw for anyone (me included) who tries to put all this in an inference perspective. The block universe provides no handle for inferences. What can you do with it? And all choices that connect us to physical situations seem totally arbitrary.

From the inferential perspective, however, one asks: which physical inference process allows the inference of timeless laws? It seems clear that in general this makes no sense, except for the specific case of "short-lived" small subsystems. There one can infer the timeless laws in a FAPP sense.

That leaves us, it seems, with the idea that cosmological time is the REAL time, and it cannot be reduced away.

/Fredrik
 
  • #91
PeterDonis said:
(Note: You do give a reference to a Cramer paper a little further in your post. I'll take a look at it, but I'm already familiar with the basics of the TI from reading other things Cramer has written, so I doubt that paper will tell me anything I don't already know.)
Excellent! I was about to give you that reference for the third time, which would have started to seem a bit odd 😅

By "follow my references" I simply meant that I had already given you exactly that same Cramer paper as a reference by saying "there's a whole list of references in this Wikipedia page" - the first of which was exactly that paper. Sorry if that was unclear (I appreciate you may have better things to do with your time than follow references through two links :) )

Anyhoo, now that that's cleared up, the one particular point you wanted clarification on was whether "superposition" is a feature of quantum mechanics or a feature of particular interpretations of QM - you seem to believe it is the former, and I made the claim it's the latter.

This thread is about the connection between SR and QM interpretations, and the transactional interpretation operates in static Minkowski spacetime by positing temporally bidirectional transactions to produce Bell-experiment correlations in that static spacetime structure. In other words, that acts as an explanation of the observables that in Copenhagen are viewed as "superposition" and "entanglement". Meaning, the concept of superposition doesn't appear in TI. Meaning, superposition is a feature of some interpretations. I'm surprised that you are familiar with TI but not with its transaction mechanism. I realize his paper may not use exactly these same words - which is why I said it requires some amount of understanding / thinking this through to realize what I'm saying is true (possibly distorted semantics aside). It is not an attack on you or anyone as a person - just a general statement of the circumstance (everyone needs to think about these things in order to understand them - to believe something without understanding it is the exact antithesis of scientific philosophy 😑)

P.S. I noticed a few people reacted to my post with doubt emoticons. I'd love to know which parts prompted that - and it would be interesting to discuss them, whatever they might be. Yes, I can dig up published references for whatever it is you have doubts about! We are here to increase each other's understanding.

Best regards!
-Anssi
 
  • Like
Likes Lynch101
  • #92
AnssiH said:
the concept of superposition doesn't appear in TI
Yes, it does. The superposition is in the multiple "offer waves" that get sent out, and the multiple "response waves" that get sent back. One of those offer-response pairs is randomly selected to become the actual result; that corresponds to collapse.

Also, in my post #65, which was where I brought up superposition in the context of this thread, I did not refer to superposition in general, but to superposition of different spacetime geometries. No interpretation of QM has a way to deal with that, including the TI. TI says that offer and response waves travel along light cones; but if we have a superposition of different spacetime geometries, we have a superposition of different light cone structures, and TI cannot handle that.
 
  • #93
Demystifier said:
Summary:: If the Bell theorem is interpreted as nonlocality of nature, then what does it tell us about the meaning of Einstein theory of relativity?

Physicists often discuss interpretations of quantum mechanics (QM), but they rarely discuss interpretations of relativity. Which is strange, because the interpretations of quantum non-locality are closely related to interpretations of relativity.

What different interpretations of QM can tell us about those interpretations of relativity? Which interpretations of relativity seem natural from the perspective of which interpretations of QM?

Wow, how did I miss this thread? All of my recent publications deal directly with this topic!

SR, GR, QM, and QFT can all be understood as providing adynamical constraints in the block universe, see our book, "Beyond the Dynamical Universe" (Oxford UP, 2018). But, I much prefer our recent results based more precisely on principle explanation, since that does not depend on the block universe view or any other ontological claims.

This paper was the 15th most downloaded physics paper in Scientific Reports for 2020: Answering Mermin's Challenge (https://www.nature.com/articles/s41598-020-72817-7). See Top 100 in Physics (https://www.nature.com/collections/ihggebhehd). Here is a shorter layperson's version in ScienceX: Einstein's missed opportunity to rid us of 'spooky actions at a distance' (https://sciencex.com/news/2020-10-einstein-opportunity-spooky-actions-distance.html). Here is a 3-min video linking to our 2021 paper in Entropy: Beyond Causal Explanation (https://encyclopedia.pub/10904). I have attached a pedagogical version under review at AJP. I have also attached our essay that just won Honorable Mention in the Gravity Research Foundation 2021 Essay Contest where we extend the idea to GR; this paper is under review at IJMPD with other winning essays in the GRF essay contest.

Of course, Demystifier is aware of all this work, I'm just posting for any newbies drawn to his thread :-)

The bottom line is that the so-called "nonlocality" evidenced by quantum entanglement does not render QM "incomplete" or "wrong" as some claim. In fact, QM is as complete as possible given that everyone must measure the same value for Planck's constant h. Indeed, the mystery of quantum entanglement and the ineluctably probabilistic nature of QM are necessary consequences of that fact, i.e., the relativity principle applied to the measurement of h. This is in complete analogy to SR where the mysteries of time dilation and length contraction are necessary consequences of the relativity principle applied to the measurement of c. We're hoping this principle account of quantum entanglement will catch on, since it's already widely adopted for SR in the introductory physics textbooks.

People are still free to consider constructive counterparts (causal mechanisms) such as the luminiferous aether or pilot waves. But, theories of the aether were abandoned long ago, so I don't hold out much hope for causal accounts of quantum entanglement.
 

Attachments

  • Like
Likes Lynch101, AnssiH and PeroK
  • #94
PeterDonis said:
Yes, it does. The superposition is in the multiple "offer waves" that get sent out, and the multiple "response waves" that get sent back. One of those offer-response pairs is randomly selected to become the actual result; that corresponds to collapse.
Now you are arguing against Cramer not AnssiH.
From https://www.researchgate.net/publication/226312851_Transactional_Interpretation_of_Quantum_Mechanics:
We note here that the sequence of stages in the emitter-absorber transaction presented here employs the semantic device of “pseudo-time”, describing a process between emitter and absorber extending across lightlike or timelike intervals of spacetime as if it occurred in a time sequence external to the process. This is only a pedagogical convention for the purposes of description. The process itself is atemporal, and the only observables come from the superposition of all of the steps that form the final transaction.

and

The “standard” Transactional Interpretation, with its insights into the mechanism behind wave function collapse through transaction formation, provides a new view of the situation that make the retreat to Hilbert space unnecessary. The offer wave for each particle can be considered as the wave function of a free particle and can be viewed as existing in normal three dimensional space. The application of conservation laws and the influence of the variables of the other particles of the system comes not in the offer wave stage of the process but in the formation of the transactions. The transactions “knit together” the various otherwise independent particle wave functions that span a wide range of possible parameter values into an interaction, and only those wave function sub-components that are correlated to satisfy the conservation law boundary conditions are permitted to participate in transaction formation. The “allowed zones” of Hilbert space arise from the action of transaction formation, not from constraints on the initial offer waves, i.e., particle wave functions.

PeterDonis said:
Also, in my post #65, which was where I brought up superposition in the context of this thread, I did not refer to superposition in general, but to superposition of different spacetime geometries. No interpretation of QM has a way to deal with that, including the TI. TI says that offer and response waves travel along light cones; but if we have a superposition of different spacetime geometries, we have a superposition of different light cone structures, and TI cannot handle that.
Isn't that your personal research? Or do you have a reference? And why do you make such arguments in the interpretations subforum anyway?
 
  • #95
AnssiH said:
So how about instead of throwing away realism or locality, we throw away the idea of particles? In that case, a local realist explanation of the Bell experiment becomes quite trivial. Place an observational limit (instead of an "existence limit") on quantized EM detection events (you can't observe the field unless it manifests an interaction event), and what you get is fully wave-like propagation of EM energy from emission to the two detection sites. Modification of the wave-like energy by polarization filters (or any mechanism that does not cause a "collapse", i.e. yield an actual detection event) would yield a cosine correlation in the probabilities of quantized detection interactions occurring. Not a great surprise: the wave propagation is best described by the Schrödinger equation, so if we manage to keep the propagation as waves from emission to detection, we expect to always get a result that is fully aligned with QM expectations, while maintaining fully ordinary local realist mechanisms.
It just does not work: detection events are paired up like particles. You can't throw that away, because it is an experimental fact (I believe the experimental limit on the pairing efficiency of downconverted photons is around 99%).
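For readers following the Bell-experiment discussion above, the cosine correlation and the bound it violates can be made concrete. The sketch below (a minimal illustration, assuming the polarization-entangled Bell state ##(|HH\rangle+|VV\rangle)/\sqrt{2}## and ideal polarizers at the standard optimal CHSH angles) evaluates the quantum correlation ##E(a,b)=\cos 2(a-b)## and the CHSH combination, which reaches ##2\sqrt{2}##, beyond the local hidden variable bound of 2:

```python
import numpy as np

def quantum_correlation(a, b):
    """Quantum prediction for the polarization correlation E(a, b) of a
    photon pair in the Bell state (|HH> + |VV>)/sqrt(2), with linear
    polarizers at angles a and b (radians)."""
    return np.cos(2 * (a - b))

def chsh(a, a2, b, b2, E):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard optimal measurement angles for polarization (radians)
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = chsh(a, a2, b, b2, quantum_correlation)
# Quantum mechanics gives S = 2*sqrt(2) ~ 2.828, violating the
# local hidden variable bound |S| <= 2 (Bell/CHSH inequality).
```

Any local model that reproduces the cosine correlation for these angle pairs must therefore break one of the assumptions of the CHSH derivation, which is the crux of the disagreement above.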
 
  • Like
Likes AnssiH
  • #96
zonde said:
Now you are arguing against Cramer
I don't see where you're getting that from, since nothing in what you quoted from Cramer's paper contradicts what I said.

zonde said:
Isn't that your personal research?
No, it's a well known fact that is one of the key motivations for searching for a theory of quantum gravity, which anyone with the background knowledge to post in an "A" level thread on this topic should already be aware of.
 
  • Like
Likes vanhees71 and weirdoguy
  • #97
PeterDonis said:
I don't see where you're getting that from, since nothing in what you quoted from Cramer's paper contradicts what I said.
Sure, I can explain where I see a contradiction. In standard QM the process of getting a measurement is a sequence of events: there is a time when there is a wavefunction (which can be represented as a superposition, depending on the choice of basis), and later there is collapse and the result of the measurement. Cramer, on the other hand, says it is an "atemporal process" (which actually seems like an oxymoron), so it should mean that "offer waves" do not exist at any moment in time. So there is no temporal process of wavefunction collapse.
And in standard QM there is a superposition of states of the entangled systems that leads to the specific measurements observed in Bell inequality tests. But here Cramer says: "The application of conservation laws and the influence of the variables of the other particles of the system comes not in the offer wave stage of the process but in the formation of the transactions." So there is no superposition of entangled-pair states; the "offer waves" for each particle are independent. They determine the outcome of the measurement of a particular particle, and only at the outcome level do they become interdependent. Clearly he proposes to discard many-particle superpositions.
 
  • Like
Likes AnssiH
  • #98
zonde said:
In standard QM the process of getting a measurement is a sequence of events.
No, it isn't. The basic math of QM is "atemporal", just like Cramer describes the TI; it is a mathematical process for making predictions. There is no claim made that that mathematical process corresponds to an actual physical process that takes place in time. Some interpretations of QM make such a claim (obviously the TI is not one of them), but not the basic math of QM.

Similar remarks apply to what you say about "superpositions" later in your post.
 
  • Like
Likes romsofia
  • #99
PeterDonis said:
No, it isn't. The basic math of QM is "atemporal", just like Cramer describes the TI; it is a mathematical process for making predictions. There is no claim made that that mathematical process corresponds to an actual physical process that takes place in time. Some interpretations of QM make such a claim (obviously the TI is not one of them), but not the basic math of QM.
Before measurement, the entangled state of a pair of particles is described as
##|\psi \rangle=\frac{1}{\sqrt{2}}\left(|H_A H_B\rangle+|V_A V_B\rangle\right)## (1)
After the measurement it is, say,
##|\psi \rangle=|V_A V_B\rangle## (2)
If you say that there is no time when (1) is true, and that (2) is true at all times for a particular pair of particles, then there is no superposition.
If you say that (1) is true at all times, then you are using a no-collapse interpretation.
 
  • #100
zonde said:
Before measurement, the entangled state of a pair of particles is described as
##|\psi \rangle=\frac{1}{\sqrt{2}}\left(|H_A H_B\rangle+|V_A V_B\rangle\right)## (1)
After the measurement it is, say,
##|\psi \rangle=|V_A V_B\rangle## (2)
No, that's not what the basic math of QM says. The basic math of QM only says that your state (1) is the one you use to predict the probabilities for various results of the measurement, and state (2) is the one you use to predict the probabilities for future measurements once you know the result of this one.

Any other claim is interpretation dependent.
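The purely predictive reading described in this post can be illustrated numerically. The sketch below (a minimal illustration, assuming the normalized form ##(|H_A H_B\rangle+|V_A V_B\rangle)/\sqrt{2}## of state (1) and real amplitudes) applies the Born rule to state (1) to get the probability of the outcome ##V_A V_B##, then uses state (2) as the updated state once that result is known:

```python
import numpy as np

# Single-photon polarization basis vectors: H = (1,0), V = (0,1).
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

def pair(x, y):
    """Two-photon product state |x>|y> as a 4-component vector."""
    return np.kron(x, y)

# State (1): (|H_A H_B> + |V_A V_B>)/sqrt(2), used to predict
# the probabilities of the various measurement results.
psi = (pair(H, H) + pair(V, V)) / np.sqrt(2)

# Born rule: probability of finding the pair in |V_A V_B>.
p_VV = abs(np.dot(pair(V, V), psi)) ** 2        # 0.5

# State (2): once the result V_A V_B is known, this is the state
# used to predict the probabilities of future measurements.
psi_after = pair(V, V)
p_VV_after = abs(np.dot(pair(V, V), psi_after)) ** 2  # 1.0
```

Nothing in this calculation asserts *when*, or whether, a physical collapse occurs; it only connects state (1) to pre-measurement probabilities and state (2) to post-measurement ones, which is the interpretation-neutral content of the basic formalism.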
 
  • Like
Likes vanhees71 and PeroK
