I Question about discussions around quantum interpretations

  • #101
PeterDonis said:
Oh, for goodness' sake. I'm sorry to sound blunt here, but if you feel cheated because you didn't think to look at the inside cover of the book before buying it, to me that's on you, not Feynman or Ralph Leighton.
I didn't buy the book; I read a copy from a public library. I don't remember whether I noticed that Leighton was actually the author. But I certainly remember that I was convinced that Leighton was a physicist and coauthor of the Feynman lectures. Maybe the Preface and Acknowledgment of "QED: The Strange Theory of Light and Matter" were responsible for this:
Leighton said:
If you are planning to study physics (or are already doing so), there is nothing in this book that has to be “unlearned”: it is a complete description, accurate in every detail, of a framework onto which more advanced concepts can be attached without modification. For those of you who have already studied physics, it is a revelation of what you were really doing when you were making all those complicated calculations!
Feynman said:
This book purports to be a record of the lectures on quantum electrodynamics I gave at UCLA, transcribed and edited by my good friend Ralph Leighton. Actually, the manuscript has undergone considerable modification. Mr. Leighton’s experience in teaching and in writing was of considerable value in this attempt at presenting this central part of physics to a wider audience.

When I read those words again, your comment about the publishers came to my mind:
PeterDonis said:
Not to mention that it couldn't have been just Leighton: the publishers of the book had to know how it was written, and they listed Feynman as an author.
At least for QED, listing Feynman as one of the authors was certainly mandatory. And here too, listing Feynman as one of the authors was probably mandatory. In fact, Angela Collier says in the video that there are tape recordings (they are even on the internet), and that according to James Gleick the stories in the book roughly correspond to those tapes, but heavily filtered. Not listing Ralph Leighton as author is the fishy part. Both books appeared in 1985; QED is probably the one which appeared first. Maybe not being listed as an author on the stories book was Ralph's revenge for being denied authorship of QED?

PeterDonis said:
For a critique of her treatment of Feynman from someone who was in a much better position than she to know relevant facts, see this:

https://www.feynmanlectures.caltech...Baez_regarding_Angela_Colliers_sham_video.pdf
Michael Gottlieb is a friend of Ralph Leighton, and also has other conflicts of interest. Hence, it is unfortunate that he wrote: "In closing I will mention that Angela is making money from publishing this poisonous trash." Overall, my impression is that he is simply bad at coping with that stuff, but not acting in bad faith. I'm not convinced by his:
She gives false and misleading information about other books too, claiming, for example, that all the stories in Feynman’s autobiographical books are lies, without giving any basis for that claim, other than her speculations.
Angela did give evidence, for example from James Gleick and Murray Gell-Mann. And his
the exercises were originally published in the 1960s, by Feynman and his coauthors
doesn't survive fact-checking either: Feynman was not listed as an author when the exercises were originally published in the 1960s. But he is listed as an author in Michael Gottlieb's publication. And of course, attacking John Baez before even trying to contact Angela Collier was not wise either:
John Baez (1/2):
... where Angela Collier will ruthlessly dissect the mythology he built around himself. You probably won't agree with everything she says, and you may hate some of it, but it will still be thought-provoking.
John Baez (2/2):
I was not implicitly endorsing all of @acollierastro's claims in my first post. I merely said what I wanted to say.

Nor am I implicitly endorsing Gottlieb's claims here. I hope Gottlieb and Collier can discuss this without using me as an intermediary.

Still, Michael Gottlieb somehow managed to convince me that Angela Collier's guesses at the motivations of Ralph Leighton, Michael Gottlieb, and other "self-appointed coauthors" of Feynman are off. What drove this home for me was
Blake C. Stacey
I'll go ahead and disagree with Collier's take on the *Feynman's Lost Lecture* book. The Goodsteins give *more* and *more accurate* credit than Feynman did.
(But her analysis of the stories in Leighton's book is not affected by this.)
 
  • #102
gentzen said:
I was convinced that Leighton was a physicist and coauthor of the Feynman lectures.
Robert Leighton was a physicist and coauthor of the Feynman lectures. His son Ralph Leighton was not, nor did he ever claim to be. Again, this doesn't look to me like a case of anyone trying to deliberately mislead; it looks like a case of you not being very careful.

I'm not going to bother arguing any further about Angela Collier's claims. Both of us have given some references, and other readers can make up their own minds, and it's off topic for this thread anyway.
 
  • #103
Here is a 2023 paper relevant to this thread: Many-Worlds: Why Is It Not the Consensus?

Abstract:
In this paper, I argue that the many-worlds theory, even if it is arguably the mathematically most straightforward realist reading of quantum formalism, even if it is arguably local and deterministic, is not universally regarded as the best realist quantum theory because it provides a type of explanation that is not universally accepted. Since people disagree about what desiderata a satisfactory physical theory should possess, they also disagree about which explanatory schema one should look for in a theory, and this leads different people to different options.
 
  • Like
Likes bhobba, Fra and javisot
  • #104
Just a quick question to Ruta.

First, I always enjoy your posts, and this is a nice one, especially for helping out a newbie from a different area, sociology.

My question is, do you think QFT with the field considered real is a realist interpretation?

Thanks
Bill
 
  • #105
bhobba said:
Just a quick question to Ruta.

First, I always enjoy your posts, and this is a nice one, especially for helping out a newbie from a different area, sociology.
And also to you :-)
bhobba said:
My question is, do you think QFT with the field considered real is a realist interpretation?

Thanks
Bill
I'm not a philosopher (my philosophy colleague and coauthor Michael Silberstein handles these kinds of questions), but naively my answer is "yes". Many (most?) in foundations believe quantum fields are the fundamental building blocks of reality. I don't see how you can get any more "real" than that, but I may not appreciate the philosophical nuance.
 
  • Like
Likes ojitojuntos and bhobba
  • #106
Thank you for the replies in this thread! I'll check the various sources you've recommended, although it will probably take me a while haha
I also wanted to clarify that I'm aware that the laws of physics are not emergent in the same sense as social structures and dynamics are. When I mentioned that as a sociologist accepting stochasticity as inherent to reality was easier, I meant from an epistemic point of view, but I'm not equating social dynamics to physics; I know that these are very different areas of knowledge.

Related to this, I'm having some trouble understanding a couple of concepts: I understand that the wavefunction evolves deterministically once you have the measurement; however, what we observe is that, before the measurement, reality looks inherently probabilistic, and that this represents the measurement problem, which quantum interpretations try to solve, right?

Now, at the effective scale of human experience, even if we assume a probabilistic interpretation of quantum, does this make a difference? Or am I wrongly assuming a barrier between the quantum and classical?
 
  • #107
ojitojuntos said:
the wavefunction evolves deterministically once you have the measurement; however, what we observe is that, before the measurement, reality looks inherently probabilistic, and that this represents the measurement problem, which quantum interpretations try to solve, right?
I think you have it somewhat backwards.

In a typical quantum experiment, we prepare a system, it goes through some kind of process, and then we measure it. Preparing the system determines the starting wave function; the wave function then undergoes unitary evolution (which is deterministic) through the process in the middle, and only when we measure at the end do any probabilities come into play.

As an example, take the Stern-Gerlach experiment (or at least an idealized version of it). We prepare a spin-1/2 particle in a definite state, say spin-z up. Then we pass it through a Stern-Gerlach magnet oriented in, say, the x direction. Doing that induces a unitary (i.e., deterministic) evolution of the state. Then we measure the particle with a detector screen downstream of the S-G magnet: here we have one of two possible results, corresponding to two different places where the particle could hit the screen: one place corresponds to a measurement result of spin-x up, the other corresponds to a measurement result of spin-x down. There is a 50% probability of each result; that's the only place where probability comes into play at all.
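To see those numbers drop out of the formalism, here is a minimal numerical sketch (mine, using standard spin-1/2 conventions; the variable names are just illustrative):

```python
# Minimal sketch (my own, standard spin-1/2 conventions): prepare spin-z up, then apply
# the Born rule to a spin-x measurement.
import numpy as np

up_z = np.array([1.0, 0.0])                   # prepared state |+z>
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])  # spin-x observable (in units of hbar/2)

eigvals, eigvecs = np.linalg.eigh(sigma_x)    # the two possible spin-x outcomes
for val, vec in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(vec, up_z)) ** 2       # Born rule: |<outcome|state>|^2
    print(f"spin-x = {val:+.0f}: probability {prob:.2f}")
# Both outcomes print 0.50, matching the 50% figure above.
```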

The measurement problem, in the context of the experiment just described, is this: the state of the particle after it goes through the S-G magnet is a superposition of spin-x up and spin-x down (actually it's an entangled superposition, with the particle's spin being entangled with the direction of its momentum--the two different momentum directions point at the two different spots on the detector screen). How is it that when we measure the particle, we don't measure any such superposition, but instead, we measure either spin-x up or spin-x down? Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
 
  • Love
  • Like
  • Wow
Likes PeroK, bhobba and ojitojuntos
  • #108
PeterDonis said:
Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
Nutting things out for yourself is a great way to learn.

I could give my answer; however, central to this whole thing is something called Gleason's Theorem:
https://arxiv.org/pdf/quant-ph/9909073

Please take a moment to read it, put your thinking cap on, and see what emerges on the other side.
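If you want a quick numerical feel for the kind of probability assignment Gleason's theorem characterizes before tackling the paper, here is a toy sketch (my own illustration, not taken from the paper):

```python
# My own toy illustration (not from the paper): the probability assignment that
# Gleason's theorem singles out in dimension >= 3, p(P) = Tr(rho P), is non-negative
# and sums to 1 over every orthonormal basis.
import numpy as np

rng = np.random.default_rng(0)
dim = 3

# Random density matrix: positive semi-definite with unit trace
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Random orthonormal basis (columns of a random unitary); one rank-1 projector per vector
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
probs = [np.trace(rho @ np.outer(Q[:, k], Q[:, k].conj())).real for k in range(dim)]

print("probabilities:", np.round(probs, 4), "sum:", round(sum(probs), 6))
```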

Post any thoughts here.

Thanks
Bill
 
  • #109
PeterDonis said:
The measurement problem, in the context of the experiment just described, is this: the state of the particle after it goes through the S-G magnet is a superposition of spin-x up and spin-x down (actually it's an entangled superposition, with the particle's spin being entangled with the direction of its momentum--the two different momentum directions point at the two different spots on the detector screen). How is it that when we measure the particle, we don't measure any such superposition, but instead, we measure either spin-x up or spin-x down? Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
Here are options for answering questions like this from Allori's paper linked in post #103:
Some theories are what Einstein [3] called constructive theories. For one thing, these theories have a microscopic ontology, which constitute the building blocks of everything else. Constructive theories allow one to understand the phenomena compositionally and dynamically: macroscopic objects are composed of microscopic particles, and the macroscopic behavior is completely specified in terms of the microscopic dynamics. Therefore, the type of explanation these theories provide is bottom-up, rather than top-down. According to Einstein, there is another type of theory, which he dubbed principle theory. Theories of this type, also called kinematic theories, are formulated in terms of principles, which are used as constraints on physically possible processes: they exclude certain processes from physically happening. In this sense, principle theories are top-down: they explain the phenomena identifying constraints the phenomena need to obey to. They are ‘kinematic’ theories because the explanations they provide do not involve dynamical equations of motion and they do not depend on the interactions the system enters into. Instead, by definition, constructive theories involve dynamical reductions in macroscopic objects in terms of the motion and interactions of their microscopic three-dimensional constituents. Flores [4] argued that this distinction could be expanded in terms of framework theories, which deal with general constraints, and interaction theories, which explicitly invoke interactions. He thought that framework theories are principle theories while interaction theories include a larger set of theories than constructive theories. Furthermore, he connected framework theories with unification and interaction theories with mechanistic explanation (see also [5,6]).
 
  • #110
PeterDonis said:
The measurement problem, in the context of the experiment just described, is this: the state of the particle after it goes through the S-G magnet is a superposition of spin-x up and spin-x down (actually it's an entangled superposition, with the particle's spin being entangled with the direction of its momentum--the two different momentum directions point at the two different spots on the detector screen).
PeterDonis said:
Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
My interpretation is that it's because the screen is just like the rest of us (an agent). I.e., if we know the possible answers, our decisions and behaviour reflect the uncertainty, but once we get the answer, it's precisely one of the possibilities, and our decisions align. I think the screen is no different.

So that raises the follow-up question: if this looks like it's all "ignorance", then why is the Bell inequality violated?

I think it's because in Bell's theorem one assumes that the ignorance is agreed upon by all agents (an objective beable), and thus the interactions between the agents should be describable as an "average" of the mechanisms for each hidden value.

But it seems this idea is wrong; it seems to me uncertainty and ignorance are themselves contextual. And when such contexts interact (i.e., two PARTS, or two AGENTS), quantum inference happens that can't be explained in "classical terms".

/Fredrik
 
  • #111
Fra said:
the screen is just like the rest of us (an agent)
I don't see how this is a viable claim, since the screen does not exhibit any of the behaviors we exhibit as agents.

Fra said:
I think it's because in Bell's theorem one assumes that the ignorance is agreed upon by all agents (an objective beable), and thus the interactions between the agents should be describable as an "average" of the mechanisms for each hidden value.

But it seems this idea is wrong; it seems to me uncertainty and ignorance are themselves contextual. And when such contexts interact (i.e., two PARTS, or two AGENTS), quantum inference happens
I know this is the QM interpretations subforum, where the rules are a little broader, but still personal speculation is off limits here. You still need some kind of reference as a basis for making claims. Is this viewpoint proposed anywhere in the literature?
 
  • #112
I don't use the "agent" label as a specific claim about the screen; I used it as a change of conceptual modelling perspective, as my opinion is that this is at the heart of the problem. Agent/observer versus observed/matter is to me mainly a matter of perspective of inference (and not an ontological claim in any way). And as different agents can each observe it, an agent is just a normal physical system, seen from an external perspective.

Normally the agent concept is understood from its internal perspective/drive (such as decision making; though this could in principle be stochastic self-organisation, as in Barandes' view, so it does not imply consciousness). Thus the agent perspective can be used without making specific assumptions or speculations about its evolution or structure. It's a modelling perspective.

In contrast, matter (screens) is described from an external perspective (the external agent being a macroscopic laboratory with human scientists), in terms of its state in a state space with dynamical laws. We can similarly think of this without specific assumptions or knowledge of the exact full dynamical law. It's a modelling perspective.

The problem here isn't that QM doesn't describe this; it's that even with the model explicitly under our noses, we have trouble understanding it intuitively. So when I interpret the screen as just an agent, I mean I try to imagine how a screen might perceive and respond to an incoming particle when only the preparation procedure is known, and seek some intuition from that view.

I.e., try to reflect on the measurement problem from this perspective. Just as we can say that an agent is just a physical system, we can ask: what does a "physical system" composed of interacting agents look like? This is not an explicit claim; it is a change of perspective that is not palatable to all, but I think it offers many insights that are lacking from the system-dynamics view, in particular when you try to think about the difference between a classical uncertainty and an uncertainty constructed from information that forces the agent to maintain a non-commutative structure at a single time.

/Fredrik
 
  • #113
Fra said:
I don't use the "agent" label as a specific claim about the screen; I used it as a change of conceptual modelling perspective,
Yes, but you aren't giving any references at all to any literature where this "perspective" is discussed. As I said, even in this subforum, we are discussing interpretations that are given in the literature, not people's own personal home-brewed interpretations. You seem to be describing the latter, and that's off topic here.
 
  • #114
PeterDonis said:
Yes, but you aren't giving any references at all to any literature where this "perspective" is discussed. As I said, even in this subforum, we are discussing interpretations that are given in the literature, not people's own personal home-brewed interpretations. You seem to be describing the latter, and that's off topic here.
As to the agent part, it partly originated from QBism. But the perspective change I refer to sits at the level of mathematical modelling; it is general, not something I brewed, and not itself an interpretation, I think. I tried to add perspectives since they helped me at least; everyone can make up their own interpretations in their heads.

Some papers relating perspectives

Hydrodynamic Limits of non-Markovian Interacting Particle Systems on Sparse Graphs
https://arxiv.org/abs/2205.01587

An elementary proof of convergence to the mean-field equations for an epidemic model
https://arxiv.org/abs/1501.03250

System Dynamics versus Agent-Based Modeling: A Review of Complexity Simulation in Construction Waste Management
https://www.mdpi.com/2071-1050/10/7/2484

The general idea is that Markovian agent-based models converge in the mean-field limit to a timeless system dynamics as the number of agents goes to infinity, while non-Markovian agent-based models would converge to time-dependent laws in the same limit. Interesting, as they even give rise to different kinds of "time". And the idea is also that agent-based models are "larger" in that they can model things system dynamics can't, because system dynamics represents a limiting case of agent-based models.
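A toy numerical illustration of that mean-field convergence (my own construction, not taken from the papers above; the SIS-style infection and recovery rates are just placeholders):

```python
# My own toy example (not from the linked papers): a Markovian agent-based SIS model whose
# infected fraction approaches the deterministic mean-field equation
#   di/dt = beta*i*(1-i) - gamma*i
# as the number of agents N grows; the rates here are just illustrative.
import numpy as np

rng = np.random.default_rng(1)
beta, gamma, dt, steps = 0.6, 0.2, 0.1, 300

def agent_based(N):
    state = np.zeros(N, dtype=bool)
    state[: N // 10] = True                              # 10% initially infected
    traj = []
    for _ in range(steps):
        i_frac = state.mean()
        infect = (~state) & (rng.random(N) < beta * i_frac * dt)
        recover = state & (rng.random(N) < gamma * dt)
        state = (state | infect) & ~recover
        traj.append(state.mean())
    return np.array(traj)

def mean_field():
    i, traj = 0.1, []
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
        traj.append(i)
    return np.array(traj)

mf = mean_field()
for N in (100, 10_000):
    dev = np.abs(agent_based(N) - mf).max()
    print(f"N={N:>6}: max deviation from mean field = {dev:.3f}")
# The deviation shrinks as N grows, which is the mean-field convergence described above.
```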

But both models have pros and cons and can be used together. System dynamics is a top-down approach that is constraint based. Agent-based modelling is more computational, driven by decentralized rules (which correspond to the causal mechanisms), and the constraints of SD correspond to limits of agent-based models. I think, as is seen in QM in particular, the system-dynamics level really seems to encrypt the causal mechanisms; it seems hard to understand what is going on. This, I think, is likely a general feature of the paradigm.

Agent-based models would be computationally intense, so they will not replace system dynamics. It's in a way easier to describe a continuous field than a dense population of interacting "parts". But if one wants to understand the interactions between the parts (which is really what I think we are talking about in the experiments), the "system view" will hide this. So no wonder evolution in Hilbert space seems hard to make sense of.

/Fredrik
 
  • #115
Fra said:
Some papers relating perspectives
Thanks, these are helpful.
 
  • #116
Fra said:
My interpretation is that it's because the screen is just like the rest of us (an agent). I.e., if we know the possible answers, our decisions and behaviour reflect the uncertainty, but once we get the answer, it's precisely one of the possibilities, and our decisions align. I think the screen is no different.
Fra said:
I don't use the "agent" label as a specific claim about the screen; I used it as a change of conceptual modelling perspective, as my opinion is that this is at the heart of the problem. Agent/observer versus observed/matter is to me mainly a matter of perspective of inference (and not an ontological claim in any way). And as different agents can each observe it, an agent is just a normal physical system, seen from an external perspective.
So you label the screen as an agent because it plays the role of observer in your modeling of the S-G experiment? I don't like this way of dropping the distinction between observer and agent. An observer suggests something passive, like the screen. An agent suggests something more active, like an information gathering and utilizing system (IGUS).
You should at least clarify how the screen is utilizing the information if you want to label it as an agent. Is it using the information to store it for later retrieval, like a hard disk? Or, on the other hand, for transforming itself, like in a nanofabrication process?
 
  • #117
gentzen said:
So you label the screen as an agent because it plays the role of observer in your modeling of the S-G experiment? I don't like this way of dropping the distinction between observer and agent. An observer suggests something passive, like the screen.
I agree. But the passive nature of the screen is an approximation that is valid just because it's huge.

Fundamentally, a passive observer is a fiction to me. In practice, a passive observer is a limiting case.

But once you consider the actual limit, I lose track of the explanations and of how dynamical law emerges.
gentzen said:
An agent suggests something more active, like an information gathering and utilizing system (IGUS).
You should at least clarify how the screen is utilizing the information if you want to label it as an agent. Is it using the information to store it for later retrieval, like a hard disk? Or, on the other hand, for transforming itself, like in a nanofabrication process?
The microstate of a macroscopic screen is itself able to encode information ~ memory.

The internal physical processes in the screen and its interaction with the environment are the only "information processing" we need.

So understanding this "agent" is of course indistinguishable from understanding in depth the microstate and physical interactions of matter. It's only the perspective that differs.

Also, the agent is not unique, just as one screen can be thought of as being made out of atoms with relations. An agent can be seen as a group of microagents.

/Fredrik
 
  • #118
Fra said:
Also, the agent is not unique, just as one screen can be thought of as being made out of atoms with relations. An agent can be seen as a group of microagents.
That is beside the point. After one has set up a model for a specific physical situation, the roles are fixed. Dwelling on the fact that one could also have set up the roles and the model differently is not helpful. It even risks confusing object-level facts with meta-level stuff like:
The action to “acquire” devices on the other hand is on a kind of meta-level, which is of limited help for discussions of how to provide physical meanings and their connection to the formalism.
 
  • #119
gentzen said:
That is beside the point. After one has set up a model for a specific physical situation, the roles are fixed. Dwelling on the fact that one could also have set up the roles and the model differently is not helpful. It even risks confusing object-level facts with meta-level stuff like:
I get your point, it does raise problems!

Also, in the normal paradigm, we shouldn't confuse them.

But here we try to probe deeper, and I think this confusion is real and not only meta stuff. So my perspective is to accept the problems (including confusion) of what the object-level facts are, how those are contextual, and how to find an objective context in some limit (i.e., macroscopic reality or the "classical world").

We might disagree on this.

/Fredrik
 
  • #120
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
 
  • #121
ojitojuntos said:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
Yes, this is the basic message when experiment and QM violate Bell's inequality.

Given some assumptions (that follow from a basic classical picture), if the results are only epistemic (in the sense of being due to physicists' ignorance) then the inequality must hold, but it doesn't! That is the problem with the idea "deterministic, but with epistemic uncertainty".
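For concreteness, a brute-force sketch of the classical bound versus the quantum value (my own, using the standard CHSH setup and angles):

```python
# My own sketch of the standard CHSH setup: every local hidden-variable assignment of
# outcomes +/-1 to the two settings on each side keeps |S| <= 2, while the quantum
# singlet-state prediction at the usual angles reaches 2*sqrt(2).
import itertools
import numpy as np

best_lhv = 0.0
for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4):  # deterministic local strategies
    S = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best_lhv = max(best_lhv, abs(S))
print("max |S| over local deterministic strategies:", best_lhv)       # -> 2

# Quantum singlet correlation E(a, b) = -cos(a - b), evaluated at the standard CHSH angles
E = lambda x, y: -np.cos(x - y)
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S_qm = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print("quantum |S| for the singlet state:", round(abs(S_qm), 3))      # -> 2.828
```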

/Fredrik
 
  • Like
Likes ojitojuntos and gentzen
  • #122
Fra said:
Given some assumptions (that follow from a basic classical picture), if the results are only epistemic (in the sense of being due to physicists' ignorance) then the inequality must hold
But there are QM interpretations where the results are epistemic and the assumptions you refer to are violated. For example, the Bohmian interpretation, in which the probabilities are purely due to our ignorance of the actual particle positions.
 
  • #123
Fra said:
this is the basic message when experiment and QM violate Bell's inequality.
Not quite. It's true that "deterministic, but with epistemic uncertainty" forces you to accept something like the Bohmian interpretation.

But "fundamentally probabilistic" forces you to accept that even though there is no way even in principle to predict in advance what the experimental results will be, the results for entangled particles measured at distant locations still have to obey the constraints imposed by the overall quantum state of the system. For example, measurements of spin around the same axis on two entangled qubits in the singlet state will always give opposite results. That always is what makes it very hard to see how a "fundamentally probabilistic" underlying physics could work--how could it possibly guarantee such a result every time?

In short, the real "basic message when experiment and QM violate Bell's inequality" is that nobody has a good intuitive picture of what's going on. There is no interpretation that doesn't force you to accept something that seems deeply problematic.
 
  • #124
ojitojuntos said:
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
This thread has highlighted that ultimately it is perhaps a matter of personal taste whether "nature is fundamentally probabilistic" or not. Let's go back to the simple example of a single radioactive atom. Taken at face value, there is nothing in the description of the atomic state that determines when the atom will decay. That is nature being probabilistic at the fundamental level.
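As a small numerical aside (my own sketch; the mean lifetime of 1 is just an arbitrary assumed unit): the exponential decay law is memoryless, so nothing in the surviving atom's state "schedules" the decay.

```python
# Small numerical aside (my own sketch; tau = 1 is an arbitrary assumed mean lifetime):
# exponential decay is memoryless, so the chance of decaying in the next interval is the
# same no matter how long the atom has already survived.
import numpy as np

tau = 1.0
rng = np.random.default_rng(4)
t = rng.exponential(tau, size=1_000_000)        # sampled decay times

for t0 in (0.0, 1.0, 3.0):
    survivors = t[t > t0]                       # atoms still undecayed at time t0
    frac = np.mean(survivors < t0 + 0.1)        # fraction decaying within the next 0.1
    print(f"P(decay within 0.1 | survived to {t0}) = {frac:.3f}")
# All three come out near 1 - exp(-0.1) ~ 0.095: nothing in the surviving atom's state
# "remembers" how long it has waited.
```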

However, saying that the atom decays entails the complication of measurement and a suitable measurement apparatus. And, you could claim that if the state of everything in the experiment was known, then you would know in advance when the atom was measured to decay. And, no one can disprove this claim.

Moreover, given the complexity of a macroscopic measurement device, it's practically (and perhaps even theoretically) impossible to know its precise state. You would need to start by measuring the measuring device - entailing a much more extensive measurement problem.

I can't speak for professional physicists, but my instinct is to accept the first (probabilistic) analysis. It feels closer to what nature is telling us. The second analysis seems to impose our thinking on nature: ultimately, no matter how loudly nature appears to be telling us that it's fundamentally probabilistic, we appeal to an innately human demand for determinism to explain away the apparent probabilities, and insist that under it all there is actually pure determinism at work.
 
  • Like
Likes ojitojuntos, martinbn and renormalize
  • #125
PeterDonis said:
Not quite. It's true that "deterministic, but with epistemic uncertainty" forces you to accept something like the Bohmian interpretation.
PeterDonis said:
In short, the real "basic message of when experiment and QM violates Bells inqeuality" is that nobody has a good intuitive picture of what's going on. There is no interpretation that doesn't force you to accept something that seems deeply problematic.
Fair enough, but I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem, seeking increasingly more "improbable" loopholes, etc. :nb)

I only pay attention to it when Demystifier has a "bad day" and presents it like this.

/Fredrik
 
  • #126
Fra said:
I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem
As I said, every QM interpretation has features that are unpalatable. It's just a question of what kinds of unpalatability you prefer to accept.
 
  • #127
ojitojuntos said:
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
The following text excerpt may be helpful (from: Rudolphina - Universität Wien https://rudolphina.univie.ac.at/en/quantum-physics-demands-a-new-understanding-of-reality):

What is the difference between probabilistic and deterministic descriptions?

Classical physics describes the world in a deterministic way—which means that we can predict outcomes by thinking about how events would certainly unfold under ideal conditions. Probabilistic descriptions, on the other hand, are only able to say how probable a given measured result is. Classical physics also works with probabilistic descriptions, "but these are merely an expression of the fact that we do not know the true circumstances," says Časlav Brukner. In other words, probabilities in classical physics merely reflect our ignorance. "Yet this is not the case in quantum physics, where probabilities occur in a fundamental, non-reducible way—there is no deterministic cause behind them." This means that the world, at its core, is indeterminate.
 
  • #128
Fra said:
Fair enough, but I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem, seeking increasingly more "improbable" loopholes, etc. :nb)

I only pay attention to it when Demystifier has a "bad day" and presents it like this.

/Fredrik
Which features of Bohm do you find unpalatable, out of interest?
 
  • #129
PeterDonis said:
There is no interpretation that doesn't force you to accept something that seems deeply problematic.

And many, like me, think that is because we have no DIRECT experience with the quantum world. It is tough to apply how we think about the world, shaped by experience in the macro world, to something we have no experience of.

Even further, the Effective Field Theory approach to QFT seems to be saying that, accepting principles such as cluster decomposition (concepts rooted in direct experience of the everyday world), for the regions we can currently probe (i.e., including the part we have no direct experience of), things can't be other than what QM says:

https://en.wikipedia.org/wiki/Effective_field_theory
Steven Weinberg's "folk theorem" stipulates how to build an effective field theory that is well behaved. The "theorem" states that the most general Lagrangian that is consistent with the symmetries of the low energy theory can be rendered into an effective field theory at low energies that respects the symmetries and respects unitarity, analyticity, and cluster decomposition.

Its physical content seems to be in abstract mathematical concepts like symmetry and constants that must be put in by hand - we have zero idea where they come from (as of now)

What was the title of that crazy 1963 movie - It's a Mad, Mad, Mad, Mad World.

As far as meaning goes, even the so-called Measurement problem is still debated after all these years:


Thanks
Bill
 
  • #130
bhobba said:
many, like me, think that is because we have no DIRECT experience with the quantum world
It's not just that we have no direct experience with the quantum world. It's that we do have tons of direct experience of a world that is not quantum--that behaves classically. Yet there doesn't seem to be any way to get that classical world out of QM without having to believe something very unpalatable.

bhobba said:
the Effective Field Theory approach to QFT seems to be saying that, accepting principles such as cluster decomposition (concepts rooted in direct experience of the everyday world), for the regions we can currently probe (i.e., including the part we have no direct experience of), things can't be other than what QM says
Only if you interpret those principles very loosely.

For example, take "cluster decomposition". There are different statements of it in the literature, but basically it boils down to, distant experiments can't influence each other. Which sounds fine until you run into Bell inequality violations and other counterintuitive results of experiments on entangled particles--for example, if you and I each measure one of a pair of entangled qubits in the singlet state, you're on Earth and I'm on a planet circling Alpha Centauri, and our measurements are both along the same spin axis, we must get opposite results. How in tarnation can that happen if the measurements can't influence each other?

The usual answer involves words like "nonlocality", but that's not actually an answer, it's just a restatement of the problem. Nobody has a good answer to how it can be like that. We have a very good answer as to what theoretical framework to use to make predictions--yes, at sufficiently low energies, that's going to be a QFT of one form or another, as the effective field theory approach says. But as for what's going on "under the hood" that makes those QFT predictions work out? Nobody has a good answer.
 
  • Love
  • Like
Likes javisot and bhobba
  • #131
iste said:
Which features of Bohm do you find unpalatable, out of interest?
One issue is going from basic non-relativistic QM to QFT.

Our own Demystifier examines some of the issues:
https://arxiv.org/abs/2205.05986

Thanks
Bill
 
  • #132
PeterDonis said:
The usual answer involves words like "nonlocality", but that's not actually an answer, it's just a restatement of the problem. Nobody has a good answer to how it can be like that.

Aside from the math, of course, which explains it no problem; but 'shut up and calculate' seems to many to be just a cop-out (I am one of those many).

As usual, excellent post, Peter. Kudos.

I only want to mention I prefer factorizability to locality, as QM is Bell non-local rather than non-local in the usual sense:
https://plato.stanford.edu/entries/bell-theorem/
'The principal condition used to derive Bell inequalities is a condition that may be called Bell locality, or factorizability. It is, roughly, the condition that any correlations between distant events be explicable in local terms, as due to states of affairs at the common source of the particles upon which the experiments are performed.'
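In symbols, the factorizability condition is standardly written as (this rendering is mine, not a formula quoted from the SEP entry):

```latex
% Bell locality / factorizability (standard notation, not quoted from the SEP entry):
% conditional on the common cause \lambda, the joint outcome distribution for settings
% x, y factorizes into purely local pieces.
P(a, b \mid x, y, \lambda) \;=\; P(a \mid x, \lambda)\, P(b \mid y, \lambda)
```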

I wince at some of my past posts on this, before the distinction penetrated my thick skull (your posts were part of finally understanding this)

Thanks
Bill
 
  • #133
bhobba said:
I prefer factorizability to locality
Note, though, that factorizability is how we would intuitively express the cluster decomposition principle. So this doesn't really help as far as our intuitions are concerned.

In terms of mathematical precision, of course, factorizability wins since "locality" is vague and can be interpreted different ways.
 
  • #134
  • #135
bhobba said:
From what I can see, the article (and its first part--what you linked to is the second part) talks about QBism and consistent histories.

QBism is fine as far as it goes--but it doesn't go very far.

QBism says that the quantum state isn't physically real; it's not a direct representation of the actual, physical state of the system. QBism says that the probabilities in QM are Bayesian--they're descriptions of our state of knowledge about whatever physical system we're trying to make predictions about.

But QBism doesn't say anything about what the actual, physical state of the system is. That's what most people seem to want from a QM interpretation, and QBism doesn't give it.

Consistent histories, from what I can tell, is just restating decoherence theory, and then waffling about whether that amounts to there really being just one history (meaning there's an actual collapse somewhere) or not (meaning no collapse and many worlds).
 
  • #136
PeterDonis said:
In terms of mathematical precision, of course, factorizability wins since "locality" is vague and can be interpreted different ways.

For those following along, you may be wondering how Weinberg, who without doubt was aware of Bell, accepted the cluster decomposition property.

I can't give the details (page number, etc). Still, if I recall correctly, Weinberg addresses this early on in his justly famous Quantum Theory of Fields (great as a reference but not good for a first exposure; for that, if you are mathematically advanced enough to know some functional analysis, I suggest 'What Is a Quantum Field Theory? A First Introduction for Mathematicians' by Michel Talagrand, which I am currently studying; just basic HS QM required, believe it or not).

Anyway, here is a recent take on it:
https://arxiv.org/html/2501.12018v1

Thanks
Bill
 
  • #137
PeterDonis said:
Consistent histories, from what I can tell, is just restating decoherence theory, and then waffling about whether that amounts to there really being just one history (meaning there's an actual collapse somewhere) or not (meaning no collapse and many worlds).

For those who want to investigate consistent histories further:
https://quantum.phys.cmu.edu/CHS/histories.html

Griffiths (no, not the Griffiths of standard EM textbook fame) has kindly made his textbook available for free.

Thanks
Bill
 
  • #138
bhobba said:
here is a recent take on it
This paper basically seems to be saying that the correlations that break factorizability don't vanish when you look at the correct measurement operators--the ones that describe measurements at the spatially separated locations where they're actually made.
 
  • #139
PeterDonis said:
This paper basically seems to be saying that the correlations that break factorizability don't vanish when you look at the correct measurement operators--the ones that describe measurements at the spatially separated locations where they're actually made.

Yes.

But it says that the further separated they are, the more observations are needed to detect that they cannot be factored. I take cluster decomposition to mean experiments can always be separated far enough that, for all practical purposes, it is true.

'Nevertheless, the larger the spatial separation, the greater the amount of needed experimental data might become in order to make a violation of the Bell’s inequality visible.'

Thanks
Bill
 
  • #140
bhobba said:
it says that the further separated they are, the more observations are needed to detect that they cannot be factored
This would need to be experimentally tested. They don't give any specific numbers, but there are experiments showing Bell inequality violations in measurements at, IIRC, kilometer distances now.
 
  • #141
bhobba said:
I take cluster decomposition to mean experiments can always be separated far enough that, for all practical purposes, it is true.
That isn't the way cluster decomposition is presented in the literature, though. There's no claim that it breaks down for measurements that are spacelike separated, but not far enough.

I'm also not sure that cluster decomposition is really a necessary principle. The really necessary principle, I think, is that spacelike separated measurements have to commute--because their time ordering is not invariant, so it can't matter which one is done first. QFT obeys that principle exactly; Bell inequality violations don't violate it.
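A quick numerical check of that commutation point (my own sketch; A and B stand in for arbitrary local observables on the two sides):

```python
# My own quick check: observables acting on different tensor factors, A (x) I and I (x) B,
# always commute -- the sense in which spacelike separated measurements are compatible
# even when their outcomes are correlated.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 2)); A = A + A.T   # arbitrary Hermitian observable for Alice
B = rng.normal(size=(2, 2)); B = B + B.T   # arbitrary Hermitian observable for Bob
I = np.eye(2)

A_full = np.kron(A, I)                     # Alice's measurement on the joint system
B_full = np.kron(I, B)                     # Bob's measurement on the joint system
print("commutator norm:", np.linalg.norm(A_full @ B_full - B_full @ A_full))   # -> 0.0
```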
 
  • #142
PeterDonis said:
How in tarnation can that happen if the measurements can't influence each other?

The usual answer involves words like "nonlocality", but that's not actually an answer, it's just a restatement of the problem. Nobody has a good answer to how it can be like that. We have a very good answer as to what theoretical framework to use to make predictions--yes, at sufficiently low energies, that's going to be a QFT of one form or another, as the effective field theory approach says. But as for what's going on "under the hood" that makes those QFT predictions work out? Nobody has a good answer.
I once asked about this, but the question wasn't understood. For example, in the case you present, does anything prevent one particle from being in one state and the other in the opposite state by pure chance?

Suppose the above happens every time we measure; that is, every time we measure one particle it has spin down, and when we measure the other particle it always has spin up. Can't that just happen by chance, without any intervening interaction? Does something in QM prevent that from happening, so that there must be an interaction (local or non-local)?
 
  • #143
javisot said:
does anything prevent one particle from being in one state and the other in the opposite state by pure chance?
"Pure chance" wouldn't make it happen every single time.

javisot said:
can't it just happen by chance
No, because "chance" would mean you would get different results on different runs. That's the definition of "chance". If you get opposite results every single time, that's not "chance".
 
  • #144
javisot said:
without any intervening interaction?
QM doesn't say there is an "interaction" between the two entangled particles. Certain QM interpretations do, but not all of them.
 
  • #145
PeterDonis said:
"Pure chance" wouldn't make it happen every single time.


No, because "chance" would mean you would get different results on different runs. That's the definition of "chance". If you get opposite results every single time, that's not "chance".
Does any theorem or principle prevent it, or do you intuitively answer that it's not possible? I also believe it's not possible, but I was wondering if something in the QM formalism prevents it.
 
  • #146
javisot said:
Does any theorem or principle prevent it
The definition of "chance" prevents it. That isn't something specific to QM.
 
  • #147
javisot said:
Does any theorem or principle prevent it, or do you intuitively answer that it's not possible? I also believe it's not possible, but I was wondering if something in the QM formalism prevents it.
How about Cournot’s principle?

See here for some books, presentations, and articles where it is discussed:
gentzen said:
And also “Scientific Reasoning : The Bayesian Approach” by Colin Howson and Peter Urbach (2006) ...
One other interesting discussion point in that book was that Cournot’s principle is inconsistent (or at least wrong), because in some situations any event which can happen has a very small probability. Glenn Shafer proposes to fix this by replacing “practical certainty” with “prediction”. He may be right. After all, I mostly learned about Cournot’s principle from his Why did Cournot’s principle disappear? and “That’s what all the old guys said.” The many faces of Cournot’s principle. Another possible fix could be to evaluate smallness of probabilities relative to the entropy of the given situations.
 