Question about discussions around quantum interpretations

  • #101
PeterDonis said:
Oh, for goodness' sake. I'm sorry to sound blunt here, but if you feel cheated because you didn't think to look at the inside cover of the book before buying it, to me that's on you, not Feynman or Ralph Leighton.
I didn't buy the book; I read a copy from a public library. I don't remember whether I noticed that Leighton was actually listed as the author. But I certainly remember that I was convinced that Leighton was a physicist and coauthor of the Feynman lectures. Maybe the Preface and Acknowledgments of "QED: The Strange Theory of Light and Matter" were responsible for this:
Leighton said:
If you are planning to study physics (or are already doing so), there is nothing in this book that has to be “unlearned”: it is a complete description, accurate in every detail, of a framework onto which more advanced concepts can be attached without modification. For those of you who have already studied physics, it is a revelation of what you were really doing when you were making all those complicated calculations!
Feynman said:
This book purports to be a record of the lectures on quantum electrodynamics I gave at UCLA, transcribed and edited by my good friend Ralph Leighton. Actually, the manuscript has undergone considerable modification. Mr. Leighton’s experience in teaching and in writing was of considerable value in this attempt at presenting this central part of physics to a wider audience.

When I read those words again, your comment about the publishers came to my mind:
PeterDonis said:
Not to mention that it couldn't have been just Leighton: the publishers of the book had to know how it was written, and they listed Feynman as an author.
At least for QED, listing Feynman as one of the authors was certainly mandatory. And here too, listing Feynman as one of the authors was probably mandatory. In fact, Angela Collier says in the video that there are tape recordings (they are even on the internet) and that, according to James Gleick, the stories in the book roughly correspond to those tapes, but heavily filtered. Not listing Ralph Leighton as an author is the fishy part. Both books appeared in 1985; QED is probably the one that appeared first. Maybe not being listed as an author on the stories book was Ralph's revenge for being denied authorship of QED?

PeterDonis said:
For a critique of her treatment of Feynman from someone who was in a much better position than she to know relevant facts, see this:

https://www.feynmanlectures.caltech...Baez_regarding_Angela_Colliers_sham_video.pdf
Michael Gottlieb is a friend of Ralph Leighton, and also has other conflicts of interest. Hence, it is unfortunate that he wrote: "In closing I will mention that Angela is making money from publishing this poisonous trash." Overall, my impression is that he is simply bad at coping with that stuff, but not acting in bad faith. I'm not convinced by his:
She gives false and misleading information about other books too, claiming, for example, that all the stories in Feynman’s autographical books are lies, without giving any basis for that claim, other than her speculations.
Angela did give evidence, for example from James Gleick and Murray Gell-Mann. And his
the exercises were originally published in the 1960s, by Feynman and his coauthors
doesn't check out either: Feynman was not listed as an author when the exercises were originally published in the 1960s. But he is listed as an author in Michael Gottlieb's publication. And of course, attacking John Baez before even trying to contact Angela Collier was not wise either:
John Baez (1/2):
... where Angela Collier will ruthlessly dissect the mythology he built around himself. You probably won't agree with everything she says, and you may hate some of it, but it will still be thought-provoking.
John Baez (2/2):
I was not implicitly endorsing all of @acollierastro's claims in my first post. I merely said what I wanted to say.

Nor am I implicitly endorsing Gottlieb's claims here. I hope Gottlieb and Collier can discuss this without using me as an intermediary.

Still, Michael Gottlieb somehow managed to convince me that Angela Collier's guesses at the motivations of Ralph Leighton, Michael Gottlieb, and other "self-appointed coauthors" of Feynman are off. What drove this home for me was
Blake C. Stacey
I'll go ahead and disagree with Collier's take on the *Feynman's Lost Lecture* book. The Goodsteins give *more* and *more accurate* credit than Feynman did.
(But her analysis of the stories in Leighton's book is not affected by this.)
 
  • #102
gentzen said:
I was convinced that Leighton was a physicist and coauthor of the Feynman lectures.
Robert Leighton was a physicist and coauthor of the Feynman lectures. His son Ralph Leighton was not, nor did he ever claim to be. Again, this doesn't look to me like a case of anyone trying to deliberately mislead; it looks like a case of you not being very careful.

I'm not going to bother arguing any further about Angela Collier's claims. Both of us have given some references, and other readers can make up their own minds, and it's off topic for this thread anyway.
 
  • #103
Here is a 2023 paper relevant to this thread: Many-Worlds: Why Is It Not the Consensus?

Abstract:
In this paper, I argue that the many-worlds theory, even if it is arguably the mathematically most straightforward realist reading of quantum formalism, even if it is arguably local and deterministic, is not universally regarded as the best realist quantum theory because it provides a type of explanation that is not universally accepted. Since people disagree about what desiderata a satisfactory physical theory should possess, they also disagree about which explanatory schema one should look for in a theory, and this leads different people to different options.
 
  • #104
Just a quick question to Ruta.

First, I always enjoy your posts, and this is a nice one, especially for helping out a newbie from a different area, sociology.

My question is, do you think QFT, with the field considered real, is a realist interpretation?

Thanks
Bill
 
  • #105
bhobba said:
Just a quick question to Ruta.

First, I always enjoy your posts, and this is a nice one, especially for helping out a newbie from a different area, sociology.
And also to you :-)
bhobba said:
My question is, do you think QFT, with the field considered real, is a realist interpretation?

Thanks
Bill
I'm not a philosopher; my philosophy colleague and coauthor Michael Silberstein handles these kinds of questions, but naively my answer is "yes". Many (most?) in foundations believe quantum fields are the fundamental building blocks of reality. I don't see how you can get any more "real" than that, but I may not appreciate the philosophical nuance.
 
  • #106
Thank you for the replies in this thread! I'll check the various sources you've recommended, although it will probably take me a while haha
I also wanted to clarify that I'm aware that the laws of physics are not emergent in the same sense as social structures and dynamics are. When I mentioned that as a sociologist accepting stochasticity as inherent to reality was easier, I meant from an epistemic point of view, but I'm not equating social dynamics to physics; I know that these are very different areas of knowledge.

Related to this, I'm having some trouble understanding a couple of concepts: I understand that the wavefunction evolves deterministically once you have the measurement; however, what we observe is that, before the measurement, reality looks inherently probabilistic, and that this represents the measurement problem, which quantum interpretations try to solve, right?

Now, at the effective scale of human experience, even if we assume a probabilistic interpretation of quantum mechanics, does this make a difference? Or am I wrongly assuming a barrier between the quantum and the classical?
 
  • #107
ojitojuntos said:
the wavefunction evolves deterministically once you have the measurement; however, what we observe is that, before the measurement, reality looks inherently probabilistic, and that this represents the measurement problem, which quantum interpretations try to solve, right?
I think you have it somewhat backwards.

In a typical quantum experiment, we prepare a system, it goes through some kind of process, and then we measure it. Preparing the system determines the starting wave function; the wave function then undergoes unitary evolution (which is deterministic) through the process in the middle, and only when we measure at the end do any probabilities come into play.

As an example, take the Stern-Gerlach experiment (or at least an idealized version of it). We prepare a spin-1/2 particle in a definite state, say spin-z up. Then we pass it through a Stern-Gerlach magnet oriented in, say, the x direction. Doing that induces a unitary (i.e., deterministic) evolution of the state. Then we measure the particle with a detector screen downstream of the S-G magnet: here we have one of two possible results, corresponding to two different places where the particle could hit the screen: one place corresponds to a measurement result of spin-x up, the other corresponds to a measurement result of spin-x down. There is a 50% probability of each result; that's the only place where probability comes into play at all.

The measurement problem, in the context of the experiment just described, is this: the state of the particle after it goes through the S-G magnet is a superposition of spin-x up and spin-x down (actually it's an entangled superposition, with the particle's spin being entangled with the direction of its momentum--the two different momentum directions point at the two different spots on the detector screen). How is it that when we measure the particle, we don't measure any such superposition, but instead, we measure either spin-x up or spin-x down? Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
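To make the arithmetic above concrete, here is a minimal numpy sketch (my own illustration of the standard Born-rule computation for this idealized setup):

```python
import numpy as np

# Prepared state: spin-z up, written in the z basis
psi = np.array([1, 0], dtype=complex)

# Spin-x eigenstates in the same basis
x_up = np.array([1, 1], dtype=complex) / np.sqrt(2)
x_down = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Born rule: probability of each outcome is |<outcome|psi>|^2
p_up = abs(np.vdot(x_up, psi)) ** 2
p_down = abs(np.vdot(x_down, psi)) ** 2
print(p_up, p_down)  # 0.5 0.5
```

Everything before this projection step is deterministic unitary evolution; the probabilities enter only here.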
 
  • #108
PeterDonis said:
Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
Nutting things out for yourself is a great way to learn.

I could give my answer; however, central to this whole thing is something called Gleason's Theorem:
https://arxiv.org/pdf/quant-ph/9909073
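Roughly stated (my paraphrase, for Hilbert spaces of dimension at least 3): any assignment of probabilities $\mu$ to projections $P$ that is additive over mutually orthogonal projections must take the form

$$\mu(P) = \operatorname{Tr}(\rho P)$$

for some density operator $\rho$. In other words, the Born rule is essentially the only consistent way to attach probabilities to quantum measurement outcomes.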

Please take a moment to read it, put your thinking cap on, and see what emerges on the other side.

Post any thoughts here.

Thanks
Bill
 
  • #109
PeterDonis said:
The measurement problem, in the context of the experiment just described, is this: the state of the particle after it goes through the S-G magnet is a superposition of spin-x up and spin-x down (actually it's an entangled superposition, with the particle's spin being entangled with the direction of its momentum--the two different momentum directions point at the two different spots on the detector screen). How is it that when we measure the particle, we don't measure any such superposition, but instead, we measure either spin-x up or spin-x down? Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
Here are options for answering questions like this from Allori's paper linked in post #103:
Some theories are what Einstein [3] called constructive theories. For one thing, these theories have a microscopic ontology, which constitute the building blocks of everything else. Constructive theories allow one to understand the phenomena compositionally and dynamically: macroscopic objects are composed of microscopic particles, and the macroscopic behavior is completely specified in terms of the microscopic dynamics. Therefore, the type of explanation these theories provide is bottom-up, rather than top-down. According to Einstein, there is another type of theory, which he dubbed principle theory. Theories of this type, also called kinematic theories, are formulated in terms of principles, which are used as constraints on physically possible processes: they exclude certain processes from physically happening. In this sense, principle theories are top-down: they explain the phenomena identifying constraints the phenomena need to obey to. They are ‘kinematic’ theories because the explanations they provide do not involve dynamical equations of motion and they do not depend on the interactions the system enters into. Instead, by definition, constructive theories involve dynamical reductions in macroscopic objects in terms of the motion and interactions of their microscopic three-dimensional constituents. Flores [4] argued that this distinction could be expanded in terms of framework theories, which deal with general constraints, and interaction theories, which explicitly invoke interactions. He thought that framework theories are principle theories while interaction theories include a larger set of theories than constructive theories. Furthermore, he connected framework theories with unification and interaction theories with mechanistic explanation (see also [5,6]).
 
  • #110
PeterDonis said:
The measurement problem, in the context of the experiment just described, is this: the state of the particle after it goes through the S-G magnet is a superposition of spin-x up and spin-x down (actually it's an entangled superposition, with the particle's spin being entangled with the direction of its momentum--the two different momentum directions point at the two different spots on the detector screen).
PeterDonis said:
Or, to put it another way, why do we measure only one spot on the detector screen where the particle hits, instead of two? What is it about the screen that makes the particle just have one measurement result?
My interpretation is that it's because the screen is just like the rest of us (an agent). I.e., if we know the possible answers, our decisions and behaviour reflect the uncertainty, but once we get the answer, it's precisely one of the possibilities, and our decisions align. I think the screen is no different.

So that raises the follow-up question: if this looks like it's all "ignorance", then why is the Bell inequality violated?

I think it's because in Bell's theorem, one assumes that the ignorance is agreed upon by all agents (an objective beable), thus the interactions between the agents should be possible to describe as an "average" of the mechanisms for each hidden value.

But it seems this idea is wrong - it seems to me that uncertainty and ignorance are themselves contextual. And when such contexts interact (i.e., two PARTS, or two AGENTS), quantum inference happens that can't be explained in "classical terms".

/Fredrik
 
  • #111
Fra said:
the screen is just like the rest of us (an agent)
I don't see how this is a viable claim, since the screen does not exhibit any of the behaviors we exhibit as agents.

Fra said:
I think it's because in Bell's theorem, one assumes that the ignorance is agreed upon by all agents (an objective beable), thus the interactions between the agents should be possible to describe as an "average" of the mechanisms for each hidden value.

But it seems this idea is wrong - it seems to me that uncertainty and ignorance are themselves contextual. And when such contexts interact (i.e., two PARTS, or two AGENTS), quantum inference happens
I know this is the QM interpretations subforum, where the rules are a little broader, but still, personal speculation is off limits here. You still need some kind of reference as a basis for making claims. Is this viewpoint proposed anywhere in the literature?
 
  • #112
I don't use the "agent" label as a specific claim about the screen; I used it as a change of conceptual modelling perspective, as my opinion is that this is at the heart of the problem. Agent/observer versus observed/matter is to me mainly a matter of perspective of inference (and not an ontological claim in any way). And as different agents can each observe it, an agent is just a normal physical system, seen from an external perspective.

Normally the agent concept is understood from its internal perspective/drive (such as decision making; though these could in principle be stochastic self-organisation, as in Barandes's view, so it does not imply consciousness). Thus the agent perspective can be used without making specific assumptions or speculations about its evolution or structure. It's a modelling perspective.

In contrast, matter (screens) is described from an external perspective (the external agent being a macroscopic laboratory with human scientists), in terms of its state in a state space with dynamical laws. We can similarly think of this without making specific assumptions or knowledge of the exact full dynamical law. It's a modelling perspective.

The problem here isn't that QM doesn't describe this; it's that even with the model explicitly under our nose, we have trouble understanding it intuitively. So when I interpret the screen to be just an agent, I mean that I try to imagine how a screen might perceive and respond to an incoming particle, when only the preparation procedure is known, and seek some intuition from that view.

I.e., try to reflect on the measurement problem from this perspective. Just as we can say that an agent is just a physical system: what does a "physical system" composed of interacting agents look like? This is not an explicit claim, it is a change of perspective that is not palatable to all, but I think it offers many insights that are lacking from the system dynamics view. In particular when you try to think about the difference between a classical uncertainty and an uncertainty constructed from information that forces the agent to maintain a non-commutative structure at a single time.

/Fredrik
 
  • #113
Fra said:
I don't use the "agent" label as a specific claim about the screen; I used it as a change of conceptual modelling perspective,
Yes, but you aren't giving any references at all to any literature where this "perspective" is discussed. As I said, even in this subforum, we are discussing interpretations that are given in the literature, not people's own personal home-brewed interpretations. You seem to be describing the latter, and that's off topic here.
 
  • #114
PeterDonis said:
Yes, but you aren't giving any references at all to any literature where this "perspective" is discussed. As I said, even in this subforum, we are discussing interpretations that are given in the literature, not people's own personal home-brewed interpretations. You seem to be describing the latter, and that's off topic here.
As to the agent part, it partly originated from QBism. But the perspective change I refer to sits at the level of mathematical modelling; it is general, not something I brewed, and is not itself an interpretation, I think. I tried to add perspectives as they helped me at least; everyone can make up their own interpretations in their heads.

Some papers relating these perspectives:

Hydrodynamic Limits of non-Markovian Interacting Particle Systems on Sparse Graphs
https://arxiv.org/abs/2205.01587

An elementary proof of convergence to the mean-field equations for an epidemic model
https://arxiv.org/abs/1501.03250

System Dynamics versus Agent-Based Modeling: A Review of Complexity Simulation in Construction Waste Management
https://www.mdpi.com/2071-1050/10/7/2484

The general idea is that Markovian agent-based models converge in the mean-field limit to a timeless system dynamics as the number of agents goes to infinity. But non-Markovian agent-based models would converge to time-dependent laws in the same limit. Interesting, as they even give rise to different kinds of "time". And the idea is also that agent-based models are "larger", in that they can model things system dynamics can't, because system dynamics represents a limiting case of agent-based models.

But both models have pros and cons and can be used together. System dynamics is a top-down approach that is constraint-based. Agent-based modelling is more computational, driven by decentralized rules (which correspond to the causal mechanisms), and the constraints of SD have a correspondence in the limits of agent-based models. I think, as is seen in QM in particular, the system dynamics level really seems to encrypt the causal mechanisms. It seems hard to understand what is going on. This, I think, is likely a general feature of the paradigm.

Agent-based models would be computationally intense, so they will not replace system dynamics. It's in a way easier to describe a continuous field than a dense population of interacting "parts". But if one wants to understand the interactions between the parts (which is really what I think we are talking about in the experiments), the "system view" will hide this. So no wonder evolution in Hilbert space seems hard to make sense of.
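As a toy illustration of that convergence (my own sketch under simplifying assumptions, not taken from the papers above): a well-mixed Markovian SIS epidemic, where the agent-based run tracks its mean-field "system dynamics" equation more and more closely as the number of agents grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Markovian agent-based SIS model on a complete graph:
# each agent is susceptible (0) or infected (1).
N = 10_000                 # number of agents
beta, gamma = 0.3, 0.1     # per-step infection / recovery probabilities
steps = 200

state = np.zeros(N, dtype=int)
state[: N // 100] = 1      # 1% initially infected

for _ in range(steps):
    i_frac = state.mean()
    infect = (state == 0) & (rng.random(N) < beta * i_frac)
    recover = (state == 1) & (rng.random(N) < gamma)
    state[infect] = 1
    state[recover] = 0

# Mean-field ("system dynamics") limit: i' = i + beta*i*(1-i) - gamma*i
i = 0.01
for _ in range(steps):
    i += beta * i * (1 - i) - gamma * i

print(state.mean(), i)  # close for large N; the gap shrinks as N grows
```

The agent-level rules are invisible in the mean-field equation, which only constrains the aggregate - which is the contrast I am pointing at.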

/Fredrik
 
  • #115
Fra said:
Some papers relating these perspectives:
Thanks, these are helpful.
 
  • #116
Fra said:
My interpretation is that it's because the screen is just like the rest of us (an agent). I.e., if we know the possible answers, our decisions and behaviour reflect the uncertainty, but once we get the answer, it's precisely one of the possibilities, and our decisions align. I think the screen is no different.
Fra said:
I don't use the "agent" label as a specific claim about the screen; I used it as a change of conceptual modelling perspective, as my opinion is that this is at the heart of the problem. Agent/observer versus observed/matter is to me mainly a matter of perspective of inference (and not an ontological claim in any way). And as different agents can each observe it, an agent is just a normal physical system, seen from an external perspective.
So you label the screen as an agent, because it plays the role of observer in your modeling of the S-G experiment? I don't like this way of dropping the distinction between observer and agent. An observer suggests something passive, like the screen. An agent suggests something more active, like an information gathering and using system (IGUS).
You should at least clarify how the screen is utilizing the information if you want to label it as an agent. Is it using the information to store it for later retrieval, like a hard disk? Or, on the other hand, for transforming itself, like in a nanofabrication process?
 
  • #117
gentzen said:
So you label the screen as an agent, because it plays the role of observer in your modeling of the S-G experiment? I don't like this way of dropping the distinction between observer and agent. An observer suggests something passive, like the screen.
I agree. But the passive nature of the screen is an approximation that is valid just because it's huge.

Fundamentally, a passive observer is a fiction to me. In practice, a passive observer is a limiting case.

But once you consider the actual limit, I lose track of explanations and of how dynamical laws emerge.
gentzen said:
An agent suggests something more active, like an information gathering and using system (IGUS).
You should at least clarify how the screen is utilizing the information if you want to label it as an agent. Is it using the information to store it for later retrieval, like a hard disk? Or, on the other hand, for transforming itself, like in a nanofabrication process?
The microstate of a macroscopic screen is itself able to encode information ~ memory.

The internal physical processes in the screen and its interaction with the environment are the only "information processing" we need.

So understanding this "agent" is of course indistinguishable from understanding in depth the microstate and physical interactions of the matter. It's only the perspective that differs.

Also, the agent is not unique, just as one screen can be thought of as being made out of atoms with relations. An agent can be seen as a group of microagents.

/Fredrik
 
  • #118
Fra said:
Also, the agent is not unique, just as one screen can be thought of as being made out of atoms with relations. An agent can be seen as a group of microagents.
That is beside the point. After one has set up a model for a specific physical situation, the roles are fixed. Dwelling on the fact that one could also have set up the roles and the model differently is not helpful. It even risks confusing object-level facts with meta-level stuff like:
The action to “acquire” devices on the other hand is on a kind of meta-level, which is of limited help for discussions of how to provide physical meanings and their connection to the formalism.
 
  • #119
gentzen said:
That is beside the point. After one has set up a model for a specific physical situation, the roles are fixed. Dwelling on the fact that one could also have set up the roles and the model differently is not helpful. It even risks confusing object-level facts with meta-level stuff like:
I get your point, it does raise problems!

Also, in the normal paradigm, we shouldn't confuse them.

But here, where we try to probe deeper, I think this confusion is real and not only meta-level stuff. So my perspective is to accept the problems (including the confusion) of what the object-level facts are, how those are contextual, and how to find an objective context in some limit (i.e., macroscopic reality or the "classical world").

We might disagree on this.

/Fredrik
 
  • #120
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
 
  • #121
ojitojuntos said:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
Yes, this is the basic message when experiment and QM violate Bell's inequality.

Given some assumptions (that follow from a basic classical picture), if the results are only epistemic (in the sense of being due to physicists' ignorance), then the inequality must hold - but it doesn't! That is the problem with the idea "deterministic, but with epistemic uncertainty".
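A compact way to see the "inequality must hold" part (a standard textbook check, sketched here with my own variable names): if a hidden state fixes all four outcomes in advance, the CHSH combination can never exceed 2 in magnitude, no matter how the hidden states are distributed:

```python
import itertools

# Deterministic local hidden variables: each hidden state fixes the
# outcomes A(a), A(a'), B(b), B(b'), all in {-1, +1}, in advance.
S_values = {a1*b1 - a1*b2 + a2*b1 + a2*b2
            for a1, a2, b1, b2 in itertools.product((-1, 1), repeat=4)}
print(S_values)  # {-2, 2}: so any average over hidden states has |S| <= 2
```

Quantum mechanics predicts, and experiment confirms, |S| = 2*sqrt(2) for suitable entangled states and measurement angles.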

/Fredrik
 
  • #122
Fra said:
Given some assumptions (that follow from a basic classical picture), if the results are only epistemic (in the sense of being due to physicists' ignorance), then the inequality must hold
But there are QM interpretations where the results are epistemic and the assumptions you refer to are violated. For example, the Bohmian interpretation, in which the probabilities are purely due to our ignorance of the actual particle positions.
 
  • #123
Fra said:
this is the basic message when experiment and QM violate Bell's inequality.
Not quite. It's true that "deterministic, but with epistemic uncertainty" forces you to accept something like the Bohmian interpretation.

But "fundamentally probabilistic" forces you to accept that even though there is no way even in principle to predict in advance what the experimental results will be, the results for entangled particles measured at distant locations still have to obey the constraints imposed by the overall quantum state of the system. For example, measurements of spin around the same axis on two entangled qubits in the singlet state will always give opposite results. That always is what makes it very hard to see how a "fundamentally probabilistic" underlying physics could work--how could it possibly guarantee such a result every time?

In short, the real "basic message when experiment and QM violate Bell's inequality" is that nobody has a good intuitive picture of what's going on. There is no interpretation that doesn't force you to accept something that seems deeply problematic.
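For concreteness, both features - the perfect same-axis anticorrelation and the Bell violation - can be read off from the singlet state directly (a minimal numpy sketch of the standard calculation, my own illustration):

```python
import numpy as np

# Singlet state |psi> = (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin observable along angle theta in the x-z plane."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):
    """Correlation <psi| A(a) (x) B(b) |psi> for the singlet state."""
    return np.real(np.vdot(psi, np.kron(spin(a), spin(b)) @ psi))

print(E(0.0, 0.0))  # -1.0: same axis -> opposite results, every time

# CHSH combination at the standard angles
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ~ -2.83, i.e. |S| = 2*sqrt(2) > 2
```

The -1.0 line is the "always opposite" constraint; the last line is the violation that rules out the simple ignorance picture.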
 
  • #124
ojitojuntos said:
Hello guys. OP again. I appreciate the discussion and thorough explanations for a layman. I have one more question about the measurement problem:
If randomness arises at measurement, and we can’t pinpoint how the collapse occurs, why is it that most physicists (according to polls I’ve seen online) consider that reality is fundamentally probabilistic, instead of deterministic, but with epistemic uncertainty?
Is this correct? Or, in a simpler sense, do experimental results and the math seem to lean more towards fundamental probabilities?
This thread has highlighted that ultimately it is perhaps a matter of personal taste whether "nature is fundamentally probabilistic" or not. Let's go back to the simple example of a single radioactive atom. Taken at face value, there is nothing in the description of the atomic state that determines when the atom will decay. That is nature being probabilistic at the fundamental level.
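To make "nothing in the description determines when" concrete, here is a toy simulation (my own sketch): a constant decay probability per time step, so an atom's survival odds never depend on how long it has already lived:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decay model: constant decay probability p per time step (memoryless).
p, n_atoms = 0.01, 100_000
lifetimes = rng.geometric(p, size=n_atoms)  # step at which each atom decays

# Chance of surviving the next 100 steps: the same for fresh atoms
# as for atoms that have already survived 500 steps.
fresh = (lifetimes > 100).mean()
aged = (lifetimes[lifetimes > 500] > 600).mean()
print(fresh, aged)  # both ~ 0.99**100 ~ 0.37
```

There is no hidden clock ticking down; the familiar exponential decay curve follows from the constant per-step probability alone.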

However, saying that the atom decays entails the complication of measurement and a suitable measurement apparatus. And you could claim that if the state of everything in the experiment were known, then you would know in advance when the atom would be measured to decay. And no one can disprove this claim.

Moreover, given the complexity of a macroscopic measurement device, it's practically (and perhaps even theoretically) impossible to know its precise state. You would need to start by measuring the measuring device - entailing a much more extensive measurement problem.

I can't speak for professional physicists, but my instinct is to accept the first (probabilistic) analysis. It feels closer to what nature is telling us. The second analysis seems to impose our thinking on nature: ultimately, no matter how loudly nature appears to be telling us that it's fundamentally probabilistic, we appeal to an innately human demand for determinism to explain away the apparent probabilities, and demand that under it all there is actually pure determinism at work.
 
  • #125
PeterDonis said:
Not quite. It's true that "deterministic, but with epistemic uncertainty" forces you to accept something like the Bohmian interpretation.
PeterDonis said:
In short, the real "basic message when experiment and QM violate Bell's inequality" is that nobody has a good intuitive picture of what's going on. There is no interpretation that doesn't force you to accept something that seems deeply problematic.
Fair enough, but I didn't even count Bohmian mechanics, as it introduces so many new issues that are far more unpalatable to me than the original problem, seeking increasingly more "improbable" loopholes etc. :nb)

I only pay attention to it when Demystifier has a "bad day" and presents it like this.

/Fredrik
 