Question about discussions around quantum interpretations

  • Context: Undergrad 
  • Thread starter: ojitojuntos
  • Tags: Amateur, Quantum physics

Discussion Overview

The discussion revolves around the various interpretations of quantum mechanics, particularly focusing on deterministic versus non-deterministic models, the implications of Objective Collapse theories, and the measurement problem. Participants explore the philosophical and experimental aspects of these interpretations, as well as the challenges posed by quantum mechanics to traditional notions of determinism.

Discussion Character

  • Debate/contested
  • Conceptual clarification
  • Exploratory

Main Points Raised

  • Some participants note that the complexity of quantum interpretations arises from the lack of experimental data that definitively falsifies deterministic interpretations like Everett's.
  • Others argue that the measurement problem highlights fundamental randomness in the universe, with Bell tests and double-slit experiments supporting this view.
  • One participant suggests that if Objective Collapse theories are being challenged, it may imply a greater likelihood of a deterministic universe, though this remains speculative.
  • Another viewpoint emphasizes that any interpretation of quantum mechanics could potentially be true, given the abstract nature of quantum phenomena.
  • Some participants express that understanding quantum mechanics deeply is crucial before delving into its interpretations, referencing comprehensive textbooks as a foundation.
  • There is a discussion about the distinction between non-relativistic quantum mechanics and more comprehensive frameworks like Quantum Field Theory (QFT), with some asserting that QFT is an inevitable low-energy approximation of a more fundamental theory.
  • Participants highlight that many physicists accept the probabilistic nature of quantum mechanics, challenging the belief in a fundamentally deterministic universe.
  • Some express that the notion of determinism might emerge from a lack of knowledge about complex systems, suggesting that non-determinism could be an effective description rather than a fundamental aspect of reality.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the nature of quantum interpretations, with multiple competing views remaining. There is ongoing debate about the implications of experimental results for determinism and the interpretation of quantum mechanics.

Contextual Notes

Limitations include the unresolved status of various interpretations, the dependence on definitions of measurement and determinism, and the challenge of reconciling quantum mechanics with classical intuitions.

  • #121
Fra said:
this is the basic message of when experiment and QM violate Bell's inequality.
Not quite. It's true that "deterministic, but with epistemic uncertainty" forces you to accept something like the Bohmian interpretation.

But "fundamentally probabilistic" forces you to accept that even though there is no way even in principle to predict in advance what the experimental results will be, the results for entangled particles measured at distant locations still have to obey the constraints imposed by the overall quantum state of the system. For example, measurements of spin around the same axis on two entangled qubits in the singlet state will always give opposite results. That always is what makes it very hard to see how a "fundamentally probabilistic" underlying physics could work--how could it possibly guarantee such a result every time?

In short, the real "basic message of when experiment and QM violate Bell's inequality" is that nobody has a good intuitive picture of what's going on. There is no interpretation that doesn't force you to accept something that seems deeply problematic.
 
Likes: bhobba
  • #122
ojitojuntos said:
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
This thread has highlighted that ultimately it is perhaps a matter of personal taste whether "nature is fundamentally probabilistic" or not. Let's go back to the simple example of a single radioactive atom. Taken at face value, there is nothing in the description of the atomic state that determines when the atom will decay. That is nature being probabilistic at the fundamental level.
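(As a minimal illustration of that reading - a Python sketch with a placeholder half-life - the theory supplies only an exponential survival probability, so identically prepared atoms decay at different, unpredictable times:)

```python
import numpy as np

rng = np.random.default_rng(0)

half_life = 1.0                 # placeholder value, arbitrary units
tau = half_life / np.log(2)     # mean lifetime

# Ten identically prepared atoms: same quantum state, different decay times.
decay_times = rng.exponential(tau, size=10)
print(np.round(decay_times, 3))
```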

However, saying that the atom decays entails the complication of measurement and a suitable measurement apparatus. And, you could claim that if the state of everything in the experiment was known, then you would know in advance when the atom was measured to decay. And, no one can disprove this claim.

Moreover, given the complexity of a macroscopic measurement device, it's practically (and perhaps even theoretically) impossible to know its precise state. You would need to start by measuring the measuring device - entailing a much more extensive measurement problem.

I can't speak for professional physicists, but my instinct is to accept the first (probabilistic) analysis. It feels closer to what nature is telling us. The second analysis seems to impose our thinking on nature: ultimately, no matter how loudly nature appears to be telling us that it's fundamentally probabilistic, we appeal to an innately human demand for determinism to explain away the apparent probabilities, and insist that under it all there is actually pure determinism at work.
 
Likes: Sambuco, ojitojuntos, martinbn and 1 other person
  • #123
PeterDonis said:
Not quite. It's true that "deterministic, but with epistemic uncertainty" forces you to accept something like the Bohmian interpretation.
PeterDonis said:
In short, the real "basic message of when experiment and QM violate Bell's inequality" is that nobody has a good intuitive picture of what's going on. There is no interpretation that doesn't force you to accept something that seems deeply problematic.
Fair enough, but I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem, seeking increasingly more "improbable" loopholes, etc.

I only pay attention to it when Demystifier has a "bad day" and presents it like this.

/Fredrik
 
  • #124
Fra said:
I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem
As I said, every QM interpretation has features that are unpalatable. It's just a question of what kinds of unpalatability you prefer to accept.
 
Likes: bhobba
  • #125
ojitojuntos said:
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
The following text excerpt may be helpful (from: Rudolphina - Universität Wien https://rudolphina.univie.ac.at/en/quantum-physics-demands-a-new-understanding-of-reality):

What is the difference between probabilistic and deterministic descriptions?

Classical physics describes the world in a deterministic way—which means that we can predict outcomes by thinking about how events would certainly unfold under ideal conditions. Probabilistic descriptions, on the other hand, are only able to say how probable a given measured result is. Classical physics also works with probabilistic descriptions, "but these are merely an expression of the fact that we do not know the true circumstances," says Časlav Brukner. In other words, probabilities in classical physics merely reflect our ignorance. "Yet this is not the case in quantum physics, where probabilities occur in a fundamental, non-reducible way—there is no deterministic cause behind them." This means that the world, at its core, is indeterminate.
 
  • #126
Fra said:
Fair enough, but I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem, seeking increasingly more "improbable" loopholes, etc.

I only pay attention to it when Demystifier has a "bad day" and presents it like this.

/Fredrik
Which features of Bohm do you find unpalatable, out of interest?
 
  • #127
PeterDonis said:
There is no interpretation that doesn't force you to accept something that seems deeply problematic.

And many, like me, think that is because we have no DIRECT experience with the quantum world. It is tough to apply how we think about the world, shaped by experience in the macro world, to something we have no experience of.

Even further, the Effective Field Theory approach to QFT seems to be saying that, accepting principles such as cluster decomposition (concepts rooted in direct experience of the everyday world) for the regions we can currently probe (i.e. including the part we have no direct experience of), things can't be other than what QM says:

https://en.wikipedia.org/wiki/Effective_field_theory
Steven Weinberg's "folk theorem" stipulates how to build an effective field theory that is well behaved. The "theorem" states that the most general Lagrangian that is consistent with the symmetries of the low energy theory can be rendered into an effective field theory at low energies that respects the symmetries and respects unitarity, analyticity, and cluster decomposition.

Its physical content seems to lie in abstract mathematical concepts like symmetry, and constants that must be put in by hand - we have zero idea where they come from (as of now).

What was the title of that crazy 1963 movie - It's a Mad, Mad, Mad, Mad World.

As far as meaning goes, even the so-called measurement problem is still debated after all these years.


Thanks
Bill
 
Likes: weirdoguy
  • #128
bhobba said:
many, like me, think that is because we have no DIRECT experience with the quantum world
It's not just that we have no direct experience with the quantum world. It's that we do have tons of direct experience of a world that is not quantum--that behaves classically. Yet there doesn't seem to be any way to get that classical world out of QM without having to believe something very unpalatable.

bhobba said:
the Effective Field Theory approach to QFT seems to be saying that, accepting principles such as cluster decomposition (concepts rooted in direct experience of the everyday world) for the regions we can currently probe (i.e. including the part we have no direct experience of), things can't be other than what QM says
Only if you interpret those principles very loosely.

For example, take "cluster decomposition". There are different statements of it in the literature, but basically it boils down to, distant experiments can't influence each other. Which sounds fine until you run into Bell inequality violations and other counterintuitive results of experiments on entangled particles--for example, if you and I each measure one of a pair of entangled qubits in the singlet state, you're on Earth and I'm on a planet circling Alpha Centauri, and our measurements are both along the same spin axis, we must get opposite results. How in tarnation can that happen if the measurements can't influence each other?

The usual answer involves words like "nonlocality", but that's not actually an answer, it's just a restatement of the problem. Nobody has a good answer to how it can be like that. We have a very good answer as to what theoretical framework to use to make predictions--yes, at sufficiently low energies, that's going to be a QFT of one form or another, as the effective field theory approach says. But as for what's going on "under the hood" that makes those QFT predictions work out? Nobody has a good answer.
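(To put a number on those Bell inequality violations: using the standard QM singlet prediction E(a,b) = -cos(a-b), the CHSH combination reaches 2√2, beyond the bound of 2 that any factorizable model must obey. A sketch, using the standard optimal angle settings:)

```python
import numpy as np

def E(a, b):
    """QM singlet correlation for spin measurements along angles a, b."""
    return -np.cos(a - b)

# Standard CHSH settings that maximize the quantum violation
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2, the classical (factorizable) bound
```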
 
Likes: mattt, javisot and bhobba
  • #129
iste said:
Which features of Bohm do you find unpalatable, out of interest?
One issue is going from basic non-relativistic QM to QFT.

Our own Demystifier examines some of the issues:
https://arxiv.org/abs/2205.05986

Thanks
Bill
 
Likes: iste
  • #130
PeterDonis said:
The usual answer involves words like "nonlocality", but that's not actually an answer, it's just a restatement of the problem. Nobody has a good answer to how it can be like that.

Aside from the math, of course—that explains it no problem—but 'shut up and calculate' seems to many just a cop-out (I am one of those many).

As usual, excellent post, Peter—kudos.

I only want to mention I prefer factorizability to locality, as QM is Bell non-local rather than non-local in the usual sense:
https://plato.stanford.edu/entries/bell-theorem/
'The principal condition used to derive Bell inequalities is a condition that may be called Bell locality, or factorizability. It is, roughly, the condition that any correlations between distant events be explicable in local terms, as due to states of affairs at the common source of the particles upon which the experiments are performed.'
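(In symbols, with measurement settings x, y, outcomes a, b, and λ the state of affairs at the common source, the condition reads

$$P(a,b\mid x,y)=\int d\lambda\,\rho(\lambda)\,P(a\mid x,\lambda)\,P(b\mid y,\lambda),$$

and Bell's theorem shows the singlet correlations admit no such decomposition.)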

I wince at some of my past posts on this, before the distinction penetrated my thick skull (your posts were part of finally understanding this).

Thanks
Bill
 
  • #131
bhobba said:
I prefer factorizability to locality
Note, though, that factorizability is how we would intuitively express the cluster decomposition principle. So this doesn't really help as far as our intuitions are concerned.

In terms of mathematical precision, of course, factorizability wins since "locality" is vague and can be interpreted different ways.
 
Likes: Sambuco and bhobba
  • #132
  • #133
bhobba said:
From what I can see, the article (and its first part--what you linked to is the second part) talks about QBism and consistent histories.

QBism is fine as far as it goes--but it doesn't go very far.

QBism says that the quantum state isn't physically real; it's not a direct representation of the actual, physical state of the system. QBism says that the probabilities in QM are Bayesian--they're descriptions of our state of knowledge about whatever physical system we're trying to make predictions about.

But QBism doesn't say anything about what the actual, physical state of the system is. And that's what most people seem to want from a QM interpretation; QBism doesn't give it.

Consistent histories, from what I can tell, is just restating decoherence theory, and then waffling about whether that amounts to there really being just one history (meaning there's an actual collapse somewhere) or not (meaning no collapse and many worlds).
 
Likes: bhobba
  • #134
PeterDonis said:
In terms of mathematical precision, of course, factorizability wins since "locality" is vague and can be interpreted different ways.

For those following along, you may be wondering how Weinberg, who without doubt was aware of Bell, accepted the cluster decomposition property.

I can't give the details (page number, etc.). Still, if I recall correctly, Weinberg addresses this early on in his justly famous Quantum Theory of Fields (great as a reference, but not good for a first exposure - for that, if you are mathematically advanced enough to know some functional analysis, I suggest 'What Is a Quantum Field Theory? A First Introduction for Mathematicians' by Michel Talagrand, which I am currently studying - just basic HS QM required, believe it or not).

Anyway, here is a recent take on it:
https://arxiv.org/html/2501.12018v1

Thanks
Bill
 
  • #135
PeterDonis said:
Consistent histories, from what I can tell, is just restating decoherence theory, and then waffling about whether that amounts to there really being just one history (meaning there's an actual collapse somewhere) or not (meaning no collapse and many worlds).

For those who want to investigate consistent histories further:
https://quantum.phys.cmu.edu/CHS/histories.html

Griffiths (no, not the Griffiths of standard EM textbook fame) has kindly made his textbook available for free.

Thanks
Bill
 
Likes: Sambuco and Morbert
  • #136
bhobba said:
here is a recent take on it
This paper basically seems to be saying that the correlations that break factorizability don't vanish when you look at the correct measurement operators--the ones that describe measurements at the spatially separated locations where they're actually made.
 
Likes: bhobba
  • #137
PeterDonis said:
This paper basically seems to be saying that the correlations that break factorizability don't vanish when you look at the correct measurement operators--the ones that describe measurements at the spatially separated locations where they're actually made.

Yes.

But it says that the further separated they are, the more observations are needed to detect that they cannot be factored. I take cluster decomposition to mean experiments can always be separated far enough that, for all practical purposes, it is true.

'Nevertheless, the larger the spatial separation, the greater the amount of needed experimental data might become in order to make a violation of the Bell’s inequality visible.'

Thanks
Bill
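(Separation aside, the generic statistics are easy to sketch: each CHSH correlator is a mean of n outcomes in {-1, +1}, so its standard error is at most 1/√n, and S, a sum of four correlators, has standard error at most 2/√n. A back-of-envelope estimate - the 5σ criterion is an arbitrary choice - of the trials needed in the ideal, noise-free case:)

```python
import numpy as np

k = 5.0                     # demand a k-sigma separation (arbitrary choice)
gap = 2 * np.sqrt(2) - 2    # QM value minus the classical bound, ~0.83

# SE(S) <= 2/sqrt(n), so require k * 2/sqrt(n) <= gap:
n = (2 * k / gap) ** 2
print(int(np.ceil(n)))      # ~146 trials per setting, ideally
```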
 
  • #138
bhobba said:
it says that the further separated they are, the more observations are needed to detect that they cannot be factored
This would need to be experimentally tested. They don't give any specific numbers, but there are experiments showing Bell inequality violations in measurements at, IIRC, kilometer distances now.
 
Likes: bhobba
  • #139
bhobba said:
I take cluster decomposition to mean experiments can always be separated far enough that, for all practical purposes, it is true.
That isn't the way cluster decomposition is presented in the literature, though. There's no claim that it breaks down for measurements that are spacelike separated, but not far enough.

I'm also not sure that cluster decomposition is really a necessary principle. The really necessary principle, I think, is that spacelike separated measurements have to commute--because their time ordering is not invariant, so it can't matter which one is done first. QFT obeys that principle exactly; Bell inequality violations don't violate it.
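(For reference, that commuting requirement is QFT's microcausality condition: local observables at spacelike separation commute,

$$[\hat{\mathcal{O}}_1(x),\,\hat{\mathcal{O}}_2(y)]=0 \quad\text{for spacelike separated } x, y,$$

which is exactly what makes the time ordering of the two measurements irrelevant.)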
 
Likes: mattt and bhobba
  • #140
PeterDonis said:
How in tarnation can that happen if the measurements can't influence each other?

The usual answer involves words like "nonlocality", but that's not actually an answer, it's just a restatement of the problem. Nobody has a good answer to how it can be like that. We have a very good answer as to what theoretical framework to use to make predictions--yes, at sufficiently low energies, that's going to be a QFT of one form or another, as the effective field theory approach says. But as for what's going on "under the hood" that makes those QFT predictions work out? Nobody has a good answer.
I once asked about this, but the question wasn't understood. For example, in the case you present, does anything prevent one particle from being in one state and the other in the opposite state by pure chance?

Suppose the above happens every time we measure—that is, every time we measure one particle it has spin down, and when we measure the other particle it always has spin up. Can't it just happen by chance, without any intervening interaction? Does something in QM prevent that from happening, so that there must be an interaction (local or non-local)?
 
  • #141
javisot said:
does anything prevent one particle from being in one state and the other in the opposite state by pure chance?
"Pure chance" wouldn't make it happen every single time.

javisot said:
can't it just happen by chance
No, because "chance" would mean you would get different results on different runs. That's the definition of "chance". If you get opposite results every single time, that's not "chance".
 
Likes: bhobba and javisot
  • #142
javisot said:
without any intervening interaction?
QM doesn't say there is an "interaction" between the two entangled particles. Certain QM interpretations do, but not all of them.
 
Likes: bhobba
  • #143
PeterDonis said:
"Pure chance" wouldn't make it happen every single time.


No, because "chance" would mean you would get different results on different runs. That's the definition of "chance". If you get opposite results every single time, that's not "chance".
Does any theorem or principle prevent it, or is it your intuition that it's not possible? I also believe it's not possible, but I was wondering if something in the QM formalism prevents it.
 
  • #144
javisot said:
Does any theorem or principle prevent it
The definition of "chance" prevents it. That isn't something specific to QM.
 
Likes: bhobba
  • #145
javisot said:
Does any theorem or principle prevent it, or is it your intuition that it's not possible? I also believe it's not possible, but I was wondering if something in the QM formalism prevents it.
How about Cournot’s principle?

See here for some books, presentations, and articles where it is discussed:
gentzen said:
And also “Scientific Reasoning : The Bayesian Approach” by Colin Howson and Peter Urbach (2006) ...
One other interesting discussion point in that book was that Cournot’s principle is inconsistent (or at least wrong), because in some situations any event which can happen has a very small probability. Glenn Shafer proposes to fix this by replacing “practical certainty” with “prediction”. He may be right. After all, I mostly learned about Cournot’s principle from his Why did Cournot’s principle disappear? and “That’s what all the old guys said.” The many faces of Cournot’s principle. Another possible fix could be to evaluate smallness of probabilities relative to the entropy of the given situations.
 
Likes: javisot
  • #146
PeterDonis said:
The definition of "chance" prevents it. That isn't something specific to QM.
The definition of chance in the Spanish dictionary is: "A combination of circumstances that cannot be predicted or avoided." The definition doesn't prevent a result from always occurring; it simply dictates that it's something that couldn't be predicted or avoided.

Could you be referring to another definition of chance? A mathematical definition perhaps? (I think your answer is correct; it's not something specific to QM, but rather to our understanding of probability)


So let's say that:

- There is entanglement produced by local interaction
- There is entanglement produced by non-local interaction
- There is no entanglement without local or non-local interaction

Right?
 
  • #147
javisot said:
The definition of chance
The definition in physics is that you are using a model that predicts a random distribution of outcomes. If your model predicts the same outcome every single time, it's not "chance" as far as physics is concerned.

javisot said:
Could you be referring to another definition of chance?
I'm referring to the definition in physics. See above.

javisot said:
So let's say that:

- There is entanglement produced by local interaction
- There is entanglement produced by non-local interaction
- There is no entanglement without local or non-local interaction

Right?
I'm not sure because I'm not sure what you're trying to say with the above.

Particles become entangled by being prepared in an entangled state. The preparation process is local. But after being prepared in an entangled state, the entangled particles can be separated, to arbitrary distances, and measurements on them will still show the correlations predicted by the entangled quantum state. In the particular case of two spin-1/2 particles prepared in the singlet state, measurements of their spin in the same direction will always give opposite results.

That's the physics.
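(A minimal simulation of that last case may help with the "chance" question above: each local outcome is a fair coin, but same-axis pairs are opposite on every run, whereas independent coins would be opposite only about half the time. Python sketch; run count and seed are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(42)
runs = 10_000

# Same-axis singlet measurements: each side individually looks 50/50 random...
alice = rng.choice([+1, -1], size=runs)
bob = -alice                        # ...but QM says the pair is always opposite

print(np.mean(alice))               # ~0.0: locally random
print(np.all(alice * bob == -1))    # True: opposite on every single run

# "Pure chance" at both ends would instead mean two independent coins:
bob_indep = rng.choice([+1, -1], size=runs)
print(np.mean(alice * bob_indep == -1))  # ~0.5: opposite only half the time
```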
 
Likes: mattt
  • #148
PeterDonis said:
The definition of "chance" prevents it. That isn't something specific to QM.

Indeed.
https://math.ucr.edu/home/baez/bayes.html

'It turns out that a lot of arguments about the interpretation of quantum theory are at least partially arguments about the meaning of the probability!'

Thanks
Bill
 
  • #149
iste said:
Which features of Bohm do you find unpalatable, out of interest?
I'll get back to this later; I have been busy and haven't had time to add readable posts in the Barandes thread either. I will get back and compare BM with Barandes' SQC from a different angle that illustrates common problems of the Bohmian and Barandes pictures, but also what makes one more palatable. Neither of them is satisfying; both need to be completed. After all, the comparison is fair, as both sort of aspire to a kind of HV approach, but in different ways... so I'll respond another time.

/Fredrik
 
  • #150
iste said:
Which features of Bohm do you find unpalatable, out of interest?
When discussing "interpretations" my perspective is always on solving the open foundational problems such as for example
  • Reduce fine tuning as it deflates explanatory value (Cosmological constant + SM model parameters or vacuum selection in string theory... all these manual settings suggests we really are missing something, string theory had an ambition here, but traded the SM parameter emergecen for something even worse in the vacuum selection)
  • Problem of time (distinction between evolution and dynamics and their different contexts)
  • Observer/Measurement problem (meaning and resolution of tension between inferential perspectives)
I think any pure interpretation of a model, in the sense of just thinking differently about the same math will never solve these problems. But it can help identify conceptual handles including different mathematical models that via correspondence make equivalent descriptions/predictions.

My preferences for which interpretations I find palatable are purely rooted in how likely I think their perspectives are to help make progress on the many, likely related, open questions.

I see the largest potential in using tools like ABM models of evolutionary IGUS. Comparing BM and SQC: BM has an awkward perspective that does not naturally mate with ABM and IGUS - but SQC does.

One of the key issues is that the nomological guide in both SQC and BM needs to be determined from the Schrödinger equation; only then, via correspondences, do we get the guidance law or transition probabilities. In both cases this is unsatisfactory.
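(For reference, the nomological guide in BM is the guidance equation: the velocity of each configuration coordinate Q_k depends on the universal wave function evaluated at the positions of all N particles,

$$\frac{dQ_k}{dt}=\frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)(Q_1,\dots,Q_N,t),$$

which makes concrete the global-system dependence discussed next.)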

But the difference is that the nomological guidance in BM sits at the global system-dynamics level. I.e., you need to consider the global system to make an inference about the future, and also specify the initial conditions of the global system. This is fine at some God's-eye-view level, but it is invalid for any subsystem view.

In SQC the nomological guidance sits at the subsystem/part level: even if there is a global time dependence, the stochastic process in time is independent of the initial conditions of other remote subsystems.

Trying to fix the problems in the SQC view seems to merge easily with ideas from ABM: seeking first-principles constructions of the transition probabilities and releasing ourselves from the Schrödinger equation and the Hamiltonian. The transition probabilities could perhaps instead be understood as evolutionary attractors in some way. This is possible thanks to the fact that Barandes moves the dynamical law from the system level to the stochastic subsystem level.

Fixing the problem in the BM view seems hard, because the causal rules are hardcoded at the global, system level. But a real observer is a subsystem, and inferences at the system level seem to be beyond the information-capacity bounds of a real observer. And although mathematically possible in a descriptive sense, it gets opaque, as it hides the constructing principle of internal interactions - which is also why it gives apparent non-local effects. I have no idea how to work with it from the inference perspective, or how it helps a real observer learn about its environment - which I see as the central task - and how to describe it from the inside and not from a God's-eye view.

So the main key is the level at which the nomological constructs work. System dynamics as a universal paradigm is deeply problematic, because emergence there usually looks more like dynamical attractors that are essentially fine-tuned via initial conditions or priors. With a bigger model of evolutionary attractors, self-organisation fixes the fine tuning as part of the physical process. The only problem is that the evolutionary changes can't always be captured by a timeless law, i.e. embedded in a higher dimension. Trying to do that misses the point, as it keeps trying to put the ruling constraints in a bigger fictional embedding.

/Fredrik
 
Likes: iste and ojitojuntos
