# Nobody understands quantum physics?

This is just because the classical approximations are accurate enough for these purposes,
and because classical approximations provide numbers rather than operators! Otherwise one never leaves the quantum domain and cannot make contact with the classical concepts used in the description of experimental arrangements.
(though the item concerning the Born-Oppenheimer approximation in solid-state physics is not always applicable). Nothing in this proves the necessity of a Heisenberg cut.
Maybe not necessary but conspicuously present everywhere.
It's hard to accept for Copenhagenianers, but there's no hint at the claimed dichotomy between a classical and a quantum world.
It is the dichotomy in the description used, not in the world itself!

Gold Member
Instrumentalist accounts of QM (e.g. Asher Peres) present the Heisenberg cut as not existing in the world itself. At the same time, they present the cut as more than a useful approximation. They say classical language is the language of measurement outcomes, and so a "dequantisation" of lab equipment is a necessary translation.

Gold Member
This is just because the classical approximations are accurate enough for these purposes (though the item concerning the Born-Oppenheimer approximation in solid-state physics is not always applicable). Nothing in this proves the necessity of a Heisenberg cut. It's hard to accept for Copenhagenianers, but there's no hint at the claimed dichotomy between a classical and a quantum world. The classical behavior of macroscopic systems is an emergent phenomenon!
If it helps, this is what the "cut" is about I think.

So take two particles which interact with each other via the idealised coupling of some observable ##A## for the first particle and some observable ##B## for the second.
Let us also idealise this interaction and say that if the incoming particle is in a state ##\ket{a_{i}}## and the target is in ##\ket{b_{0}}##, then when they interact they evolve as:
##\ket{a_{i}}\otimes\ket{b_{0}} \rightarrow \ket{a_{i}}\otimes\ket{b_{i}}##
Then if the first particle starts off in a superposition the evolution is:
##\left(\sum_{i}c_{i}\ket{a_{i}}\right)\otimes\ket{b_{0}} \rightarrow \sum_{i}c_{i}\ket{a_{i}}\otimes\ket{b_{i}}##
In other words we don't say the two particles obtain some well-defined, though possibly unknown, value of the product observable ##A\otimes B##.

However, when a particle interacts with a macroscopic apparatus such as an emulsion film, we do tend to say that the product observable corresponding to the particle position and to which particular grain of the emulsion was blackened does have a definite (though possibly unknown) value. In other words, if ##B## is a macroscopic collective coordinate we treat things as if:
##\left(\sum_{i}c_{i}\ket{a_{i}}\right)\left(\sum_{j}c_{j}^{*}\bra{a_{j}}\right)\otimes\ket{b_{0}}\bra{b_{0}} \rightarrow \sum_{i}|c_{i}|^{2}\ket{a_{i}b_{i}}\bra{a_{i}b_{i}}##
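To make the difference concrete, here is a toy numerical check (my own sketch, with hypothetical two-outcome amplitudes, not taken from any of the papers mentioned): the unitarily entangled state and the "cut" mixture agree on all outcome probabilities but differ in the off-diagonal interference terms.

```python
import numpy as np

# Basis states for two two-level systems (hypothetical two-outcome example)
a = [np.array([1, 0]), np.array([0, 1])]   # |a_0>, |a_1>
b = [np.array([1, 0]), np.array([0, 1])]   # |b_0>, |b_1> (pointer states)

c = np.array([0.6, 0.8])                   # amplitudes, |c_0|^2 + |c_1|^2 = 1

# Unitary entangling evolution: sum_i c_i |a_i> ⊗ |b_i>
psi = sum(c[i] * np.kron(a[i], b[i]) for i in range(2))
rho_unitary = np.outer(psi, psi.conj())

# "Cut" treatment: classical mixture sum_i |c_i|^2 |a_i b_i><a_i b_i|
rho_cut = sum(abs(c[i])**2 * np.outer(np.kron(a[i], b[i]), np.kron(a[i], b[i]))
              for i in range(2))

# Diagonals (outcome probabilities) agree; off-diagonal coherences differ
print(np.allclose(np.diag(rho_unitary), np.diag(rho_cut)))  # True
print(np.allclose(rho_unitary, rho_cut))                    # False
```

The two descriptions are operationally identical for any measurement diagonal in the pointer basis, which is why the "cut" treatment works in practice.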

On a mathematical level the "cut" really boils down to the statements that:
(i) We treat the macroscopic collective coordinates corresponding to dial readings as if they were perfectly described by classical probability theory.
(ii) It's not consistent to treat them otherwise.

It's not so much that macroscopic objects can't be treated with quantum theory.

This whole "cut" business is a fairly obscure topic that isn't really discussed in most texts. One might say it's similar to conceptual issues surrounding finite time evolution in QFT for example.

I found the following paper very helpful on the topic:
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.130402
Arxiv here:
https://arxiv.org/abs/2003.07464

I learned of it from an online colloquium. The authors derive an explicit mathematical contradiction in that paper, so it's a clear exposition of the issue.

dextercioby
Gold Member
IMO saying no one understands QT is more a statement about how our minds are structured than a statement about the theory.
I'm not even sure about that. I first encountered entanglement in a completely un-dramatic, almost purely mathematical context. I have a very vivid memory of a friend later trying to convince me of how intuitively weird the phenomenon is, but I just got stuck saying "of course the measurements come out that way" over and over because I had in my head $$\frac{1}{\sqrt{2}} \left ( |0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B \right )$$ and my friend had in his head spinning charged balls or something of the sort.
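For what it's worth, the perfect anticorrelation really is immediate from that state. A quick numpy check (my own illustration; the measurement axis is arbitrary):

```python
import numpy as np

# Singlet state (1/sqrt2)(|0>_A |1>_B - |1>_A |0>_B)
ket0, ket1 = np.array([1, 0]), np.array([0, 1])
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Spin-up eigenvector along an arbitrary axis parametrised by (theta, phi)
def up(theta, phi):
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# For any common axis, "both spins up" has amplitude zero (antisymmetry):
theta, phi = 1.1, 2.3            # arbitrary direction
amp = np.vdot(np.kron(up(theta, phi), up(theta, phi)), singlet)
print(abs(amp)**2)               # 0: perfect anticorrelation
```

The amplitude vanishes identically for every axis, which is exactly the "of course the measurements come out that way" reaction.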

That said, I know it's 1st semester freshman physics, but whenever I see an ice skater pull their arms in and spin faster, it still looks like magic to me. Whenever I see a video of a tornado or hurricane, I know how it works logically, but I still intuitively feel that there must be a thermodynamic law being violated somehow. So I've never really gotten the hubbub about why quantum mechanics is uniquely unintuitive.

Gold Member
That said, I know it's 1st semester freshman physics, but whenever I see an ice skater pull their arms in and spin faster, it still looks like magic to me.
Yeah, that and magnets. Of course people understand all these things through abstraction. This entire thread is essentially about how people feel about quantum mechanics. As far as I'm concerned formal understanding is all that's possible for some phenomena. People are free to feel otherwise.

lodbrok
I never understood this argument. E.g., using a silicon detector to detect photons doesn't mean using a "classical device". It's based on the photoelectric effect and thus relies on (at least semiclassical) quantum theory to be understood.
Nit-picking, I know, but are you implying that the silicon detector needs to understand quantum theory before it can detect a photon? I conjecture that physical things/phenomena don't care what theories we use to describe them.

Gold Member
Nit-picking, I know, but are you implying that the silicon detector needs to understand quantum theory before it can detect a photon? I conjecture that physical things/phenomena don't care what theories we use to describe them.
I assume he means the silicon detector must have a quantum mechanical description if we want to model the dynamics of the measurement process. The "Heisenberg cut" is more like a "Heisenberg overlap", where there is an ambiguity in the description of the measurement device: a quantum description to consistently model the dynamics of the detector and source, and a classical description to accommodate the recorded data.

LittleSchwinger
Couchyam
There's no limit on measurement. I don't know where this fairy tale comes from. It's often written in popular-science textbooks, but it's wrong.
What I meant is that once you try to dissect the process of wave function collapse, you inevitably need to make sense of another statistical ensemble of collapsed wave functions (you can’t ‘circumvent’ the problem of wave function collapse to probe its physical properties the way one can other phenomena, such as entanglement.) Independently, there is an experimental practical limit to how quantum systems can be measured (but this limit isn’t integral to the theory itself.)
What quantum theory tells us is that it is impossible to prepare a quantum system such that all observables take determined values. A state of "complete knowledge" is a pure state, and it's uniquely determined when preparing the system such that a complete set of compatible observables takes determined values. Usually observables which are not compatible with this complete set then do not take determined values.
Quantum theory (and experiment) also tells us that when a wave function is measured in a particular basis, it collapses onto one of several allowed eigenvectors of the observable in question. Note that in quantum theory, a complete set of observables only exists in principle. In practice, there is a limit to the number of observables that are experimentally realizable; to be clear, this limit has nothing to do with the ‘limit’ that quantum theory imposes on the predictability of measurement outcomes.
At the present stage of our knowledge, we cannot say whether there is a collapse of the quantum state that goes beyond standard QT or not. For sure it's not the hand-waving addition to the well-defined formalism of QT one often reads in textbooks promoting some flavors of the Copenhagen interpretation, which include a collapse postulate. I've never found any necessity to assume a collapse to apply QT to the description of real-world experiments.
You probably have never conducted a real-world quantum experiment. All quantum experiments have noise: the interference patterns produced by electrons passing through a diffraction grating consist of small blips or dots produced by individual electrons, whose wave functions somehow literally collapsed from an extended smooth complex distribution to a point. Quantum physics says there is literally no way to predict how this collapse happens: entanglement of the electrons with many-body degrees of freedom can explain how the classical probability distribution appears on the screen, but it cannot predict (a priori) how an individual outcome is selected from the large number of possibilities.
Further, there's no hint at a "classical-quantum divide" aka "Heisenberg cut". Today, ever larger systems have been used to verify "quantum effects". E.g., the ~10 kg mirrors of the LIGO experiment show quantum behavior.
But the 10 kg mirrors of the LIGO experiment also exhibit classical behavior, and one might go so far as to say that the classical behavior of the LIGO mirrors (e.g. their visibility, their elasticity, or their weight) vastly overwhelms any quantum phenomena that might be observable intermittently. I think a quantum model of a LIGO mirror would also struggle to account for how the wave function evolves between partial wave function collapses of various parts of the mirror. Is it possible that, among systems whose fastest observation update (measurement/wave function collapse) rate approaches the shortest time scale of the system, the behavior is essentially classical? From a certain perspective, the classical-quantum divide is obvious: time is measured 'classically', not quantum mechanically (try devising a version of quantum mechanics with a fully intrinsically quantum time variable). This has to do with the fact that we are immersed in an environment where wave function collapses happen almost continuously. You could object that our most accurate measure of time comes from quantum oscillations of an atomic clock, but this is beside the point: the state of the clock must be measured, and there exist other, faster (if less statistically regular) measurement processes in nature.

It’s possible that somehow (random) wave function collapse is an illusion of sorts, and that some kind of Schroedinger equation still describes reality at a fundamental scale, but my point is that it is extremely difficult to imagine how that might work, and quantum physics as it is conventionally understood offers no clues (and furthermore says that clues beyond what quantum theory gives us already are literally physically impossible.)

physika
Gold Member
2022 Award
I don't see any evidence for collapse. Which state the coupled quantum system and the measurement device (and the "environment") end up in depends on the measurement device.

That macroscopic objects behave "classically" is an emergent phenomenon. It results from coarse graining to obtain effective theories for macroscopic, collective observables, which in this sense always form an open quantum system and thus are subject to decoherence, leading to classical behavior.

Gold Member
That macroscopic objects behave "classically" is an emergent phenomenon. It results from coarse graining to obtain effective theories for macroscopic, collective observables, which in this sense always form an open quantum system and thus are subject to decoherence, leading to classical behavior.
The paper I linked is essentially a demonstration of a contradiction if you treat a process as a measurement without irreversible decoherence. That's really all it involves. Similar to what Morbert mentioned above.

vanhees71
Gold Member
2022 Award
Sure, a measurement must involve irreversible decoherence, because you want to store the measurement result (at least long enough to "read it out" of your equipment and evaluate it), but that's nothing which is not understandable within the quantum theory of open systems. You don't need to invoke the classical behavior of the measurement device as an additional assumption, i.e., the claim that on a fundamental level "large enough" systems are not described by QT but by classical physics. It's rather derivable from quantum many-body theory, and one can nowadays even demonstrate quantum effects on macroscopic objects, like this example with the LIGO mirrors (making use, btw, of quantum features of light, i.e., squeezed states!).
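A minimal open-system sketch of this point (my own toy model, not from the LIGO work): pure dephasing of a single qubit under a Lindblad-type master equation kills the coherences while leaving the outcome probabilities untouched, which is the irreversible "storage" step in miniature.

```python
import numpy as np

# Toy open system: pure dephasing of one qubit, L = sqrt(g) σ_z
# drho/dt = g (σ_z ρ σ_z − ρ): coherences decay, populations are untouched.
sz = np.diag([1.0, -1.0])
g, dt, steps = 0.5, 0.01, 1000

# Start in an equal superposition: maximal coherence
psi = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(psi, psi)

for _ in range(steps):                    # crude Euler integration
    rho = rho + dt * g * (sz @ rho @ sz - rho)

print(np.round(np.diag(rho).real, 3))     # populations stay [0.5, 0.5]
print(abs(rho[0, 1]) < 1e-3)              # off-diagonal coherence has decayed
```

The off-diagonal element decays as ##e^{-2gt}## while the diagonal is exactly conserved, so after the interaction the reduced state is, for all practical purposes, a classical probability distribution over pointer values.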

Gold Member
In chapter 9 “The problem of the interpretation of quantum theory” of the book “The Structure of Physics”/1/, Carl Friedrich von Weizsäcker puts it in a simple way:

##\gamma##) The role of the observer in the Copenhagen interpretation

To recapitulate, we have qualitatively made it plausible that the quantum theory of measurement proves the semantic consistency of the "orthodox" interpretation; we will present a formal model in the next section. Yet it has not diminished by one bit the necessity of an explicit reference to knowledge. The ##\Psi##-function is defined as knowledge. The reduction of the wave packet is not a dynamical evolution of the ##\Psi##-function in accordance with the Schrödinger equation. Rather, it is identical to the event in which an observer recognizes a fact. It does not happen so long as only the measured object and measurement apparatus interact, nor so long as the apparatus has not been read out after the measurement interaction ends; it is the gain of knowledge associated with reading. [red by LJ]

/1/ “The Structure of Physics” is a newly arranged and revised English version of "Aufbau der Physik" by Carl Friedrich von Weizsäcker

Gold Member
Sure a measurement must involve irreversible decoherence, because you want to store the measurement result (at least long enough to "read it out" of your equipment and evaluate it), but that's nothing which is not understandable within quantum theory of open systems
Certainly. In fact as far as I can see older authors thought so as well, but lacked the actual theory of decoherence to make the argument solid. You see references to "thermal effects" or the many body nature of the device and so on in Bohr and others.

Anyway we have many roads to the "approach to classicality" these days. Decoherence, coarse-graining, ergodic processes, purely kinematic arguments, newer ones like the reduction of the observable algebra and so on.

WernerQH
Anyway we have many roads to the "approach to classicality" these days. Decoherence, coarse-graining, ergodic processes, purely kinematic arguments, newer ones like the reduction of the observable algebra and so on.
Offering too many solutions to a problem may raise suspicions.
Isn't the emergence of "classicality" what you are tempted to read into these formalisms?
How come so many people still puzzle over the measurement problem?

Gold Member
Offering too many solutions to a problem may raise suspicions.
These are all just different processes or effects that contribute to classicality. Contributions differ in different systems. If somebody finds that "suspicious" I don't know what to say.

gentzen
WernerQH
These are all just different processes or effects that contribute to classicality.
It could mean that you may not yet have identified the true "process" or "effect" that leads to classicality. Or the change of viewpoint that would make it obvious (even to those wondering why measurements lead to unique outcomes) how quantum mechanics contains classical mechanics as a limiting case.

Offering too many solutions to a problem may raise suspicions.
Off Topic: Gauss gave six different proofs of the quadratic reciprocity law. They were all sound. Now there are others. No one is suspicious.

Gold Member
2022 Award
Offering too many solutions to a problem may raise suspicions.
Isn't the emergence of "classicality" what you are tempted to read into these formalisms?
How come so many people still puzzle over the measurement problem?
Perhaps from a lack of interest in dealing with the real problems?

WernerQH
Gold Member
It could mean that you may not yet have identified the true "process" or "effect" that leads to classicality
I don't think that's a sensible way of looking at these things. If you look at non-equilibrium studies in statistical mechanics, there are several processes that drive a system to equilibrium, thermalisation, etc. It would be nonsensical to look for the "true" one when they are all present and contribute.

WernerQH
I don't think that's a sensible way of looking at these things.
I'm not concerned about the measurement problem at all. But I do believe there is a real conceptual problem; it's just not about "measurement", and not about "processes" or "effects" leading to the supposed "collapse of the wave function".

how quantum mechanics contains classical mechanics as a limiting case.
Quantum theory is the better, more comprehensive theory. But it depends on classical physics for its formulation. Isn't there some circularity here? How can a theory that should supplant classical mechanics be dependent on it?

Gold Member
2022 Award
Where does the fundamental formulation of QT depend on classical physics?

Of course, the one real problem is gravity, and since gravity is strongly entangled (pun intended) with the spacetime model, there is in a sense indeed a classical aspect in the formulation of relativity, i.e., the spacetime model is entirely classical. In this sense there's a real scientific problem concerning the completeness of QT as the "theory of everything", but I'm pretty sure that it has nothing to do with any vaguely formulated philosophical problem about measurement.

LittleSchwinger
Fra
I don't see any evidence for collapse.
Are you never surprised?

/Fredrik

WernerQH
Where does the fundamental formulation of QT depend on classical physics?
Can you refer to a formulation of quantum theory that does not use the term "measurement"? And doesn't measurement require classical apparatus?

physika
Fra
Where does the fundamental formulation of QT depend on classical physics?
It does not depend on classical theory per se, but the theoretically perfect confidence in the distributions IMO requires a solid reference, a "classical world"; without it, not only is the future fuzzy, even the cloud itself is fuzzy. That's not to imply that QM depends on Newtonian mechanics as such.

in this sense there's a real scientific problem concerning the completeness of QT as the "theory of everything", but I'm pretty sure that it has nothing to do with any vaguely formulated philosophical problem about measurement.
Whether this has nothing to do with it is where we disagree. My only question is, given that one cares at all: if this has nothing to do with QG, what does? I mean, what is the alternative? For me the path seems nasty, but I see no other path.

/Fredrik

gentzen
Gold Member
2022 Award
I'm convinced that we'll never get something physical out of a mere philosophical quibble about a pseudoproblem. Obviously no philosophical gibberish has brought us any closer to an ansatz of how to formulate "quantum gravity". I think we'll need some empirical hint, but I don't see anything in sight yet.

Gold Member
@vanhees71 , this is more out of curiosity about how you would phrase it, I'm not debating as such.

We commonly say quantum theory is a probability theory where not all quantities take well-defined values at once. So if we start with a particle in an eigenstate of z-axis spin, ##S_{z}##, such as ##\ket{\uparrow}## then we know that ##S_{x}## doesn't have a well-defined value.

If we do an ##S_{x}## measurement however we do end up in a state with a well-defined value for ##S_{x}##. "When" in the measurement process do you think this well-defined value was obtained?

Gold Member
2022 Award
It's just by filtering. Take the Stern-Gerlach experiment, which can be designed as an almost perfect filter measurement: you use an inhomogeneous magnetic field with the right properties to split the beam into two spatially well separated parts, of which you know (from unitary quantum dynamics, by the way) that position and spin component (which one is selected by the direction of the large homogeneous part of the magnetic field) are almost perfectly entangled. Blocking one partial beam thus prepares a beam with a determined spin component. It's clear that you can prepare only one spin component and no more, i.e., the spin components in other directions don't take determined values, and the so-prepared quantum state implies the, and only the, probabilities for the results of measurements of any spin component. Measuring the prepared spin component gives the prepared value with 100% probability, i.e., this and only this spin component takes a determined value. The preparation is obtained after the beams got sufficiently well separated due to the (unitary) dynamics of the particles moving in the magnetic field.
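A toy linear-algebra version of this filtering picture (my own sketch; the two coarse "beam" modes and the amplitudes are hypothetical): the magnet entangles spin with position, and projecting out one beam leaves a pure, well-prepared spin state.

```python
import numpy as np

# Spin states and two coarse "beam position" modes after the magnet
up, dn = np.array([1, 0]), np.array([0, 1])          # spin along the field
top, bot = np.array([1, 0]), np.array([0, 1])        # upper/lower beam

# The magnet unitarily correlates spin with beam:
# (c_up|up> + c_dn|dn>) ⊗ |center>  →  c_up|up>|top> + c_dn|dn>|bot>
c_up, c_dn = np.sqrt(0.3), np.sqrt(0.7)
state = c_up * np.kron(up, top) + c_dn * np.kron(dn, bot)

# "Filtering": block the lower beam, i.e. project onto the top-beam mode
P_top = np.kron(np.eye(2), np.outer(top, top))
filtered = P_top @ state
filtered /= np.linalg.norm(filtered)                 # renormalize surviving beam

# The surviving beam carries a pure spin-up state: spin is now prepared
M = filtered.reshape(2, 2)                           # rows: spin, cols: beam
rho_spin = M @ M.conj().T
print(np.allclose(rho_spin, np.outer(up, up)))       # True
```

The projection here just encodes "only the unblocked beam continues"; the spin preparation itself is a consequence of the unitary spin-position entanglement.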

LittleSchwinger
Gold Member
The preparation is obtained after the beams got sufficiently well separated due to the (unitary) dynamics of the particles moving in the magnetic field.
Okay thanks. That basically lines up with what I first learned from Gottfried (1st Ed sans Yan) and the old Schwinger "Humpty-Dumpty" paper.

If we do an ##S_{x}## measurement however we do end up in a state with a well-defined value for ##S_{x}##. "When" in the measurement process do you think this well-defined value was obtained?
Measuring the prepared spin component gives with 100% probability the prepared value, i.e., this and only this spin component takes a determined value. The preparation is obtained after the beams got sufficiently well separated due to the (unitary) dynamics of the particles moving in the magnetic field.
This does not answer the query. Prepared is a superposition without definite values of ##S_x##. But measured is one of the values ##\pm1##, let us say ##+1##. The question is when, in a quantum description of the detector, the definite value ##+1## is obtained.

Gold Member
2022 Award
After the magnet, the spin component in the direction of the field is (almost completely) entangled with position, i.e., in each of the two partial beams "selected" by the magnet you have a well-prepared spin component.

Gold Member
This does not answer the query. Prepared is a superposition without definite values of ##S_x##. But measured is one of the values ##\pm1##, let us say ##+1##. The question is when, in a quantum description of the detector, the definite value ##+1## is obtained.

To quote the Schwinger essay mentioned by @LittleSchwinger:

"Therefore, the mathematical scheme for microscopic measurements can certainly not be the representation of physical properties by numbers. [...] we must instead look for a new mathematical scheme in which the order of performing physical operations is represented by an order of performance of mathematical operations. The mathematical scheme that was finally found to be necessary and successful is the representation, in a very abstract way, of physical properties not by numbers but by elements of an algebra for which the sense of multiplication matters."

"If you know the state, you can then predict what the result of repeated trials of measurement of a particular physical property will be. You will have perfectly determinate, statistical predictions but no longer individual predictions."

"The knowledge of the state does not imply a detailed knowledge of every physical property but merely, in general, of what the average or statistical behavior of physical properties may be."

i) We should be careful not to attribute a property like ##S_x = +1## to the object of measurement. ##+1## would only be attributed to the classical datum post-measurement. If we want to speak about properties of the object of measurement, we would use a representation like ##\Pi_{S_x=+1}##. This more robust representation frees us of worrying about when a particular property does or doesn't obtain. Only the ordering in consideration of measurement operations is important.

ii) We do not have to associate the preparation of the object of measurement with the moment a microscopic property obtains. The quantum state is not an assertion of what properties the system has at any given time. It is only an assertion of what future statistics can be expected.


LittleSchwinger
Gold Member
2022 Award
To quote the Schwinger essay mentioned by @LittleSchwinger:

"Therefore, the mathematical scheme for microscopic measurements can certainly not be the representation of physical properties by numbers. [...] we must instead look for a new mathematical scheme in which the order of performing physical operations is represented by an order of performance of mathematical operations. The mathematical scheme that was finally found to be necessary and successful is the representation, in a very abstract way, of physical properties not by numbers but by elements of an algebra for which the sense of multiplication matters."

"If you know the state, you can then predict what the result of repeated trials of measurement of a particular physical property will be. You will have perfectly determinate, statistical predictions but no longer individual predictions."

"The knowledge of the state does not imply a detailed knowledge of every physical property but merely, in general, of what the average or statistical behavior of physical properties may be."
That's the most precise no-nonsense statement I can think of. Indeed, this introductory chapter of Schwinger's textbook is a must-read for anybody interested in the interpretational issues of QT.

i) We should be careful not to attribute a property like ##S_x = +1## to the object of measurement. ##+1## would only be attributed to the classical datum post-measurement. If we want to speak about properties of the object of measurement, we would use the representation ##\Pi_{S_x=+1}##. This more robust representation frees us of worrying about when a particular property does or doesn't obtain. Only the ordering in consideration of measurement operations is important.
Yes, the state implies the probabilities for the outcome of any measurement you can do on the prepared system. If the state is such that for an observable ##P(s_x=+\hbar/2)=1##, then this observable takes the determined value ##+\hbar/2##; otherwise its value is indeterminate, and only the probabilities for each possible outcome are given by the preparation in that state.
ii) We do not have to associate the preparation of the object of measurement (e.g. by blocking the other beam) with the moment a microscopic property obtains. The quantum state is not an assertion of what properties the system has at any given time. It is only an assertion of what future statistics can be expected.
The state represents a preparation procedure. It predicts probabilistic, and only probabilistic, properties of the outcomes of future measurements.
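As a concrete instance (my own minimal sketch): prepare spin-up along z and compute the Born-rule probabilities for ##S_x## and ##S_z## outcomes. The prepared component is certain; the incompatible one is maximally indeterminate.

```python
import numpy as np

# Pure state prepared as spin-up along z
psi = np.array([1, 0])

# S_x = ±ħ/2 eigenvectors: (|0> ± |1>)/sqrt2
x_plus  = np.array([1,  1]) / np.sqrt(2)
x_minus = np.array([1, -1]) / np.sqrt(2)

# Born rule: the state fixes all and only the outcome probabilities
p_plus  = abs(np.vdot(x_plus,  psi))**2   # 0.5
p_minus = abs(np.vdot(x_minus, psi))**2   # 0.5
print(p_plus, p_minus)

# Measuring the *prepared* component S_z gives the prepared value with certainty
p_z_up = abs(np.vdot(np.array([1, 0]), psi))**2
print(p_z_up)                             # 1.0
```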

LittleSchwinger
Couchyam
It's just by filtering. Take the Stern-Gerlach experiment, which can be designed as an almost perfect filter measurement: you use an inhomogeneous magnetic field with the right properties to split the beam into two spatially well separated parts, of which you know (from unitary quantum dynamics, by the way) that position and spin component (which one is selected by the direction of the large homogeneous part of the magnetic field) are almost perfectly entangled. Blocking one partial beam thus prepares a beam with a determined spin component. It's clear that you can prepare only one spin component and no more, i.e., the spin components in other directions don't take determined values, and the so-prepared quantum state implies the, and only the, probabilities for the results of measurements of any spin component. Measuring the prepared spin component gives the prepared value with 100% probability, i.e., this and only this spin component takes a determined value. The preparation is obtained after the beams got sufficiently well separated due to the (unitary) dynamics of the particles moving in the magnetic field.
Remember that the signal produced in the original Stern-Gerlach experiment consisted of two well separated clusters of points on a screen, not a single pair of points. The experiment strongly hints at the existence of a spin degree of freedom that is entangled, but it also hints at the existence of a number of other configurational degrees of freedom associated with the wave function that ultimately must somehow collapse to a point with each trial in order for any experimental data to be obtained at all. It's possible that similar (uncontrolled) filtering mechanisms are at work in separating each measurement outcome, but verifying that requires performing a second experiment (trying to 'dissect' the wave function collapse process, to determine each filtering mechanism in turn.)

At a certain point, you need to explain why the Hamiltonian of a system is definite and not itself described by a wave function or density operator. It could be that the Hamiltonian is a macroscopic statistical average that emerges from a random ensemble of definite (observed) events, or that it is somehow connected to the ability of the experimenter to both modulate a system and 'read' its character, or something else entirely, but quantum mechanics in its traditional formulation makes it very difficult to tell which of these perspectives if any is in fact closest to the truth.

physika
Mentor
At a certain point, you need to explain why the Hamiltonian of a system is definite
I'm not sure what you mean by "definite". The Hamiltonian is an operator.

vanhees71
Couchyam
I'm not sure what you mean by "definite". The Hamiltonian is an operator.
What I mean is that the Hamiltonian is (conventionally understood as) a 'definite operator', as opposed to an operator-valued random number generator, or a wave function over a space of operators (or part of some larger wave function of the universe.) It might change over time, and it might be impossible to measure its components exactly, but at every instant it has (in principle) a well-defined value. If a formulation of quantum mechanics existed in which the Hamiltonian wasn't definite, the authors of the theory would need to explain the exact nature of its indefiniteness very carefully (possibly by appealing to a more fundamental mechanism for time evolution.) There may be some way for a Hamiltonian to 'emerge' from a wave function that either lacked inherent dynamics or was described by some completely strange-looking but unitary 'super-Hamiltonian', but that would mark a departure from the conventional picture.

Fra