Problems with Paper on QM Foundations

In summary, Maudlin argues that the three claims 1.A-1.C of the paper cannot all be true together. A separate point of dispute in the thread is whether those claims correctly describe what quantum theory says.
  • #1
I was on YouTube and saw a video from Oxford on QM foundations. I didn't agree with it, but that is not an issue - I disagree with a lot of interpretational stuff. The video mentioned a paper it considered essential reading:
https://www.mathematik.uni-muenchen.de/~bohmmech/BohmHome/files/three_measurement_problems.pdf

Is it just me, or has the author not shown the appropriate care? In particular, he claims that theories violating 1.A are hidden-variable theories. I thought: what? How does that follow? It may simply mean that nature is fundamentally probabilistic - or, in other words, that the three assumptions are not inconsistent.

Specifically, I do not think the following is logically justified:
'And since we are interested in individual cats and detectors and electrons, since it is a plain physical fact that some individual cats are alive and some dead, some individual detectors point to "UP" and some to "DOWN", a complete physics, which is able at least to describe and represent these physical facts, must have more to it than ensemble wave-functions.'

My response is: that might be your idea of what complete physics is, but it might be best not to assume it is everyone's idea of complete physics. Einstein, of course, thought QM incomplete, but I am not sure that this is necessarily why he did.

Thanks
Bill
 
  • #2
I come more and more to the conclusion that it is a waste of time to read papers of this kind. What's written under 1A-1C is not even a correct description of what quantum theory says. I don't need to repeat my standard description of what a pure state is: it is indeed the most determined state of a system within QT, but it implies only probabilistic properties for the outcomes of measurements and nothing more.

For me the issue is settled by the results of all the high-precision Bell tests. Quantum theory has by now been confirmed by all these tests, and for me the only necessary interpretation is the minimal statistical interpretation. The description of the measurement process in the most general sense seems to need a description in terms of POVMs rather than idealized von Neumann filter measurements, but that is, imho, not in contradiction with the minimal statistical interpretation.
 
  • #3
bhobba said:
My response is: that might be your idea of what complete physics is, but it might be best not to assume it is everyone's idea of complete physics. Einstein, of course, thought QM incomplete, but I am not sure that this is necessarily why he did.
It is quite easy to construct situations which allow two different descriptions. Namely, take an experiment and describe it in the Copenhagen interpretation with two different quantum-classical cuts. Then you have some intermediate system, which you can, on the one hand, describe with a wave function, but, on the other hand, also with a trajectory.

In this situation, it is quite obvious that the trajectory contains more information. And it is real information - at least if that system is macroscopic and you can see it yourself. But the quantum description of that part is also valid and makes the same empirical predictions (decoherence and so on).

So we have a quantum system, well described by a quantum wave function - but, on the other hand, the information that wave function contains is incomplete.
 
  • #4
vanhees71 said:
What's written under 1A-1C is not even a correct description of what quantum theory says.
And where is it claimed that it should be a description of what quantum theory says? The claim is that these three claims, taken together, are inconsistent. Combined with your claim, it would follow that what quantum theory says is inconsistent - which is certainly not Maudlin's intention.
vanhees71 said:
for me the only necessary interpretation is the minimal statistical interpretation.
Then why do you participate in such philosophical discussions?
 
  • #5
bhobba said:
https://www.mathematik.uni-muenchen.de/~bohmmech/BohmHome/files/three_measurement_problems.pdf

Is it just me, or has the author not shown the appropriate care? In particular, he claims that theories violating 1.A are hidden-variable theories. I thought: what? How does that follow? It may simply mean that nature is fundamentally probabilistic.
Maudlin is not a crackpot. His point of view is (with Einstein) that a 'system' is a single quantum system. A complete description of a system in this sense cannot be an ensemble description, for the latter refers only to collections of similarly prepared systems. Hence it says nothing at all about the single system, except in the very special circumstance that a property is exactly the same for all members of the ensemble.
 
  • #6
I'd say that the biggest problem causing this is the intrinsic 'hive mentality' of humans; students almost always end up becoming proponents of whatever view their professor holds. Foundational views also spread very geographically: look at how Everett's resurgence happened almost exclusively at Oxford due to David Deutsch and David Wallace, or equally Bohmian mechanics at Rutgers due to Sheldon Goldstein and Roderich Tumulka.

I honestly believe that the foundational issues of QM are so complex, and so coloured by whichever perspective you happen to be presented with, that 'appeal to authority' ends up deciding the matter. This is also how Niels Bohr's view became the 'adopted fact' for almost a century...
 
  • #7
A. Neumaier said:
Maudlin is not a crackpot.
Among other things, he is the author of the book 'Philosophy of Physics: Quantum Theory' (2019), which discusses the philosophy of deterministic approaches to quantum physics, favoring Bohmian mechanics but also discussing GRW and many-worlds theories.
A. Neumaier said:
His point of view is (with Einstein) that a 'system' is a single quantum system. A complete description of a system in this sense cannot be an ensemble description, for the latter refers only to collections of similarly prepared systems. Hence it says nothing at all about the single system, except in the very special circumstance that a property is exactly the same for all members of the ensemble.
Indeed, Maudlin writes (p.10, column 2, top):
Tim Maudlin said:
But this ensemble interpretation does not avoid the trilemma - it simply directly denies 1.A. According to this approach, the wave-function is not a complete physical description of any individual detector or cat or electron. And since we are interested in individual cats and detectors and electrons, since it is a plain physical fact that some individual cats are alive and some dead, some individual detectors point to "UP" and some to "DOWN", a complete physics, which is able at least to describe and represent these physical facts, must have more to it than ensemble wave-functions.
 
  • #8
A. Neumaier said:
Maudlin is not a crackpot.

I never thought he was. What I believe is that his notion of a complete description is not mine. I take 'complete' to mean that the state, together with the observables, is all we need for calculating probabilities. In fact, with the assumption of non-contextuality, Gleason's theorem tells us that states must exist and that all they can do is predict probabilities. Being a theorem, states are then simply a mathematical aid in calculations; the primary 'axiom' seems to be that, given an observation, we can find an observable whose possible outcomes are its eigenvalues.

One of the issues with QM is that words can mean different things to different people. It is especially apparent with the word 'observation', which some take to mean human observation. In QM it means, more precisely, an interaction that leaves a mark here in the macro world. But even that is not exact enough, and issues remain. I define an observation as having occurred once decoherence has taken place, although I realize these days that this also has problems. It is an issue decoherent histories grapples with by trying to base everything on the concept of a history, which has an exact meaning.

The ensemble interpretation, analogous to the frequentist interpretation of probability (which has the issue that the law of large numbers is only valid for infinitely many 'experiments'), is a 'practical' interpretation that can't answer fundamental questions. However, it is used all the time in applications.
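To make the decoherence criterion a little more concrete, the textbook picture (schematically, as I understand it) is this: for a system entangled with its environment, ##|\Psi\rangle=\sum_i c_i |s_i\rangle|E_i\rangle##, the reduced state is
$$\rho_S=\mathrm{Tr}_E\,|\Psi\rangle\langle\Psi|=\sum_{i,j} c_i c_j^*\,\langle E_j|E_i\rangle\,|s_i\rangle\langle s_j| \;\longrightarrow\; \sum_i |c_i|^2\,|s_i\rangle\langle s_i|\quad\text{as}\quad \langle E_j|E_i\rangle\to\delta_{ij},$$
i.e. the off-diagonal terms are suppressed once the environment states become effectively orthogonal. Why this by itself does not single out one outcome is, of course, part of the remaining problem I mentioned.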

A. Neumaier said:
Among other things, he is the author of the book 'Philosophy of Physics: Quantum Theory' (2019), which discusses the philosophy of deterministic approaches to quantum physics, favoring Bohmian mechanics but also discussing GRW and many-worlds theories.

Interestingly, that was the conclusion of the YouTube video that got me to look at the paper.

Thanks
Bill
 
  • #9
Sunil said:
So we have a quantum system, well described by a quantum wave function - but, on the other hand, the information that wave function contains is incomplete.

Not sure how that fits in with Gleason's theorem. But your point about where to put the classical/quantum cut is well taken. That is indeed a complex problem, not entirely solved by decoherence. I think approaches that take everything as quantum from the start may be on the right track.

Thanks
Bill
 
  • #10
A. Neumaier said:
Among other things, he is the author of the book 'Philosophy of Physics: Quantum Theory' (2019), which discusses the philosophy of deterministic approaches to quantum physics, favoring Bohmian mechanics but also discussing GRW and many-worlds theories.

I had a look at the preview and reviews on Amazon and thought, what the heck - got a copy. It should prove interesting.

Thanks
Bill
 
  • #11
bhobba said:
his notion of a complete description is not mine. I take 'complete' to mean that the state, together with the observables, is all we need for calculating probabilities.
So you are interpreting 'complete' not in the sense of Einstein (EPR, 1935) but in the sense of Born and Heisenberg (Como 1928). These two senses are the extreme poles between which the various interpretations must find a place.

My just-finished paper 'Quantum mechanics via quantum tomography' reconciles the two extremes. Probability in quantum physics gets the same status as probability in classical physics. This justifies Einstein's position while not changing the core of the position of Born and Heisenberg that quantum mechanics is a complete modeling framework for physics. But Born's view of probability is modified to accommodate the techniques of modern quantum information theory.
 
  • #12
A. Neumaier said:
So you are interpreting 'complete' not in the sense of Einstein (EPR, 1935) but in the sense of Born and Heisenberg (Como 1928).

I had not thought of that. I interpret it in the sense of Gleason's theorem, which more or less fixes the state as just an aid to doing probability calculations. Your paper should prove interesting - thanks for the link.

Thanks
Bill
 
  • #13
bhobba said:
I had not thought of that. I interpret it in the sense of Gleason's theorem, which more or less fixes the state as just an aid to doing calculations.
Gleason's theorem is just a mathematical result. It doesn't involve the notion of being 'complete', hence cannot give the latter a meaning.
 
  • #14
A. Neumaier said:
Gleason's theorem is just a mathematical result. It doesn't involve the notion of being 'complete', hence cannot give the latter a meaning.

Wait a minute. Assume non-contextuality; then Gleason's theorem says the state is the only way to assign probabilities to the eigenvalues of quantum operators. To me, that shows it is complete: it is all the state can tell us. Again - am I missing something? It also, of course, has Kochen-Specker as a simple corollary.
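For concreteness, the statement I have in mind is (roughly, in standard notation): for ##\dim\mathcal{H}\ge 3##, any non-contextual assignment of probabilities to projectors must take the form
$$p(a_i)=\mathrm{Tr}(\rho\,P_{a_i}),\qquad \rho\ge 0,\quad \mathrm{Tr}\,\rho=1,$$
where ##P_{a_i}## projects onto the eigenspace of the eigenvalue ##a_i##. So the state ##\rho## exists as a matter of theorem, and all it does is generate these probabilities.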

As an aside I recently found an interesting Einstein quote:

In 1931, Einstein said of Paul Dirac:

“Dirac, to whom, in my opinion, we owe the most perfect exposition, logically, of this [quantum] theory, rightly points out that it would probably be difficult, for example, to give a theoretical description of a photon such as would give enough information to enable one to decide whether it will pass a polarizer placed (obliquely) in its way or not.”

Thanks
Bill
 
  • #15
bhobba said:
Assume non-contextuality; then Gleason's theorem says the state is the only way to assign probabilities to the eigenvalues of quantum operators. To me, that shows it is complete.
You interpret Gleason's theorem in the light of your already assumed meaning of 'complete'.

Why should you want to assign probabilities to eigenvalues? What has this to do with completeness? In Einstein's view, probabilities are proof of incompleteness, since they lack information about single systems.
bhobba said:
It is all the state can tell us.
The state assigns a quantum value to each operator. This is much more - and much more useful - than assigning probabilities to eigenvalues. The latter works only for normal operators, which excludes important operators such as annihilation operators (whose eigenvectors are coherent states).
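In formulas (standard notation): the state assigns to an operator ##A## the quantum value
$$\langle A\rangle=\mathrm{Tr}(\rho A),$$
which is defined whether or not ##A## is normal (whenever the trace exists), whereas probabilities of eigenvalues, ##p(a_i)=\mathrm{Tr}(\rho P_{a_i})##, require a spectral resolution. The annihilation operator ##a## satisfies ##[a,a^\dagger]=1##, hence is not normal, yet it has the coherent states as eigenvectors, ##a|\alpha\rangle=\alpha|\alpha\rangle##.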
 
  • #16
A. Neumaier said:
Why should you want to assign probabilities to eigenvalues?

In QM, the eigenvalues are assumed to be the possible values of the associated observation. What we want to do is figure out what probability each outcome has. Gleason shows this can be done if we know this thing we call the state.

Thanks
Bill
 
  • #18
A. Neumaier said:
Only in pre-1970 quantum mechanics, not in the modern POVM view of measurement. This is the point of my paper 'Quantum mechanics via quantum tomography'.

I have gotten a book written from the view that POVMs are fundamental - Foundations of Quantum Mechanics: An Empiricist Approach. I want to study it before commenting further. I do know that Gleason is much easier to prove using POVMs and, in that sense, they are a better foundation. But I need to study more to understand POVMs as the basis of QM rather than von Neumann decompositions.

Thanks
Bill
 
  • #19
bhobba said:
I have gotten a book written from the view that POVMs are fundamental - Foundations of Quantum Mechanics: An Empiricist Approach.
  • W.M. De Muynck, Foundations of Quantum Mechanics, an Empiricist Approach, Kluwer, 2002.
bhobba said:
I need to study more to understand POVMs as the basis of QM rather than von Neumann decompositions.
Starting with my paper is probably easier. I tried to make it readable with little sophistication required beyond simple linear algebra, and I detailed (I hope) all the intermediate steps in calculations that in advanced treatments would be left to the reader.
 
  • #20
A. Neumaier said:
  • W.M. De Muynck, Foundations of Quantum Mechanics, an Empiricist Approach, Kluwer, 2002.
On p.181, he says: 'In Einstein's view a theory making statistical statements is necessarily an incomplete theory.'
 
  • #21
bhobba said:
What I believe is that his notion of a complete description is not mine. I take 'complete' to mean that the state, together with the observables, is all we need for calculating probabilities.
What he really means by complete is that a complete theory must describe ontology. That's why he insists on a description of single systems: single systems really exist, i.e., they are ontic. It has nothing to do with the question of whether nature is deterministic or probabilistic. (Note also that the word "determinate" in 1.C does not mean deterministic, but ontic.)

Let me describe in more detail why referring to fundamental probability does not help. If you point out that the complete laws of physics describe only probabilities, Maudlin (or someone with a similar style of thinking) will say: "Fine, probability of what?" If you reply - the probability of a measurement outcome - he might ask something like: "Fine, and what exactly is a measurement?" If you start to argue that it has something to do with interaction, he might object: "But not every interaction counts as a measurement; how do we know which interactions are measurements and which are not?" The discussion will continue in that spirit, and at some point you will probably run out of clear answers, at which point he will conclude: "You don't have an answer, so your view of QM is incomplete. I rest my case."
 
  • #22
bhobba said:
But I need to study more to understand POVMs as the basis of QM rather than von Neumann decompositions.
Note that any POVM measurement in a "small" Hilbert space of the measured system can also be understood as a projective von Neumann measurement in a larger Hilbert space, which treats the measuring apparatus as a part of the full quantum system.
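Schematically (standard notation, details suppressed): a POVM is a collection of effects ##E_k\ge 0## on the system space ##\mathcal{H}## with ##\sum_k E_k=\mathbb{1}## and outcome probabilities ##p(k)=\mathrm{Tr}(\rho E_k)##. The dilation provides an isometry ##V:\mathcal{H}\to\mathcal{K}## into a larger space and projectors ##P_k## there such that
$$E_k=V^\dagger P_k V,\qquad p(k)=\mathrm{Tr}\!\big(V\rho V^\dagger P_k\big),$$
where ##\mathcal{K}## is to be thought of as the Hilbert space of system plus apparatus.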
 
  • #23
Demystifier said:
"Fine, probability of what?" If you reply - probability of a measurement outcome, he might ask something like: "Fine, and what exactly a measurement is?"
Questions of this kind can be continued indefinitely, even for Bohmian mechanics, where one can ask: "Fine, and what exactly is a particle?" - "A point with a definite position." - "How can a point with no extension be observable?" - "You don't have an answer, so your view of QM is incomplete. I rest my case."
Demystifier said:
Note that any POVM measurement in a "small" Hilbert space of the measured system can also be understood as a projective von Neumann measurement in a larger Hilbert space, which treats the measuring apparatus as a part of the full quantum system.
This is a misstatement of Naimark's theorem, which asserts that for every POVM there exists a (nonphysical, purely mathematically constructed) Hilbert space in which a projection operator exists whose reduction to the original (physical) Hilbert space reproduces the POVM. Nothing at all guarantees that the larger Hilbert space is physical and/or contains the measurement apparatus.
 
  • #24
A. Neumaier said:
This is a misstatement of Naimark's theorem, which asserts that for every POVM there exists a (nonphysical, purely mathematically constructed) Hilbert space in which a projection operator exists whose reduction to the original (physical) Hilbert space reproduces the POVM. Nothing at all guarantees that the larger Hilbert space is physical and/or contains the measurement apparatus.
You are right that Naimark's theorem does not guarantee that the larger Hilbert space is physical, but no explicit counterexample has ever been found.
 
  • #25
A. Neumaier said:
Questions of this kind can be continued indefinitely, even for Bohmian mechanics, where one can ask: "Fine, and what exactly is a particle?" - "A point with a definite position." - "How can a point with no extension be observable?" - "You don't have an answer, so your view of QM is incomplete. I rest my case."
Right, all interpretations are incomplete, one way or another. My point was to explain why referring to fundamental probability does not help in achieving completeness. Your argument only reinforces this point because it also has nothing to do with probability. The trajectory could be either stochastic or deterministic and the problem is the same.
 
  • #26
A. Neumaier said:
So you are interpreting 'complete' not in the sense of Einstein (EPR, 1935) but in the sense of Born and Heisenberg (Como 1928). These two senses are the extreme poles between which the various interpretations must find a place.

My just finished paper 'Quantum mechanics via quantum tomography' reconciles the two extremes. Probability in quantum physics gets the same status as probability in classical physics. This justifies Einstein's position while not changing the core of the position of Born and Heisenberg that quantum mechanics is a complete modeling framework for physics. But Born's view on probability is modified to accommodate the techniques of modern quantum information theory.
But probability in quantum physics does not have the same status as probability in classical physics, and I think that's the whole point of all these discussions about "interpretations".

Classical physics is a deterministic description of the phenomena, i.e., by definition all observables always take determined values, and the statistical/probabilistic description in classical statistical physics is just due to ignorance about the details of a macroscopic system with very many degrees of freedom.

How much can be ignored is to some extent our choice of description. The first (and usually most important) step in describing a macroscopic system is the choice of the "relevant macroscopic observables" and of the level at which you want to describe them. Usually you start from the Liouville equation for the ##N##-particle distribution function, which is a complete description of the classical system and has to be reduced to an effective statistical description via the chosen "relevant observables" by treating the microscopic degrees of freedom statistically and then coarse graining. One way is to derive the Boltzmann equation for the one-particle phase-space distribution function. Its equation of motion follows from the Liouville equation but contains the two-particle correlation function (the equation for the ##n##-particle distribution function contains the ##(n+1)##-particle distribution function, building up the BBGKY hierarchy). Then you truncate the BBGKY hierarchy by the molecular-chaos assumption, factorizing the two-particle distribution function and neglecting the piece describing two-particle correlations. Whether or not this is a good description you can only decide for each system under consideration (e.g., it's not a good description for a plasma, where you need to take the long-ranged Coulomb interaction into account, leading to the Vlasov(-Boltzmann) equation).
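Schematically, in one common convention (signs and normalizations differ between texts), the first equation of the hierarchy reads
$$\left(\partial_t+\frac{\mathbf{p}_1}{m}\cdot\nabla_{\mathbf{x}_1}+\mathbf{F}_1\cdot\nabla_{\mathbf{p}_1}\right) f_1 = \int \mathrm{d}^3x_2\,\mathrm{d}^3p_2\;\nabla_{\mathbf{x}_1}V(\mathbf{x}_1-\mathbf{x}_2)\cdot\nabla_{\mathbf{p}_1} f_2,$$
and the molecular-chaos assumption closes it by factorizing ##f_2(1,2)\simeq f_1(1)\,f_1(2)##, which turns the right-hand side into the Boltzmann collision term.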

In quantum theory the probabilities enter the description at the fundamental level. They do not enter the description of Nature due to our ignorance of the determined values of the observables; rather, the observables do not necessarily take determined values at all. That this is a valid description is demonstrated by the clear observation of the violation of Bell's inequalities, which hold for local deterministic hidden-variable theories. Of course, one way out might be a nonlocal deterministic hidden-variable theory, but I haven't seen any convincing one yet, at least not one taking relativity into account. For non-relativistic QM, Bohmian mechanics might be considered one non-local realization of such a picture.
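To recall the concrete statement in its CHSH form: with correlation functions ##E(a,b)## for measurement settings ##a,a'## and ##b,b'##,
$$S=E(a,b)+E(a,b')+E(a',b)-E(a',b'),\qquad |S|\le 2\ \text{(local hidden variables)},\qquad |S|\le 2\sqrt{2}\ \text{(QM)},$$
and the quantum bound is saturated, e.g., by suitable spin measurements on the singlet state, which is what the high-precision Bell tests confirm.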

For macroscopic systems the derivation of effective statistical theories is pretty similar to the classical case, and it remedies some conceptual problems of the classical theory by using the notion of indistinguishability (bosons/fermions in 3 spatial dimensions) as well as the Planck constant as a natural measure for phase-space volumes, resolving the problems with entropy (Gibbs paradox, statistical derivation of the 3rd law).

I haven't yet read your new paper in detail, but I don't think that the use of the more general description of measurements with POVMs changes much of the fundamental content of quantum theory, at least not if I'm allowed to use it with the physical meaning it has, e.g., in Peres's textbook, which uses the usual probabilistic meaning of quantum states represented by statistical operators. I think I can also understand your paper in this sense without running into contradictions, but then again I don't understand what your interpretation of the POVM formalism is on the fundamental/axiomatic level if I'm not allowed to interpret the symbols and their manipulations (particularly the usual trace for the "quantum expectation values" with the statistical operator in its usual probabilistic meaning). Otherwise I think the POVM formalism is indeed a way to describe measurements with real-world equipment more realistically.

What I also do not see is that a TPC (time projection chamber) really contradicts the standard interpretation of measurements of position and momentum. What you really measure are indeed the "pointer readings", i.e., the "electric current signals" due to gas discharges of the particle "along its track". There is a "track" in the same sense as there is one in a cloud chamber, and its emergence is explained by standard quantum mechanics as detailed by Mott. What you measure are thus the positions and times of the "electric current signals"; positions (like "vertices of particle decays") are resolved within the resolution of the device, and energies and momenta are inferred from these "position-time measurements" via the curvature of the tracks due to the applied magnetic field.
 
  • #27
Demystifier said:
You are right that Naimark's theorem does not guarantee that the larger Hilbert space is physical, but no explicit counterexample has ever been found.
No counterexample has ever been looked for. No physical Hilbert space has ever been constructed.

Moreover, starting from a physical Hilbert space, one does not recover the same Hilbert space by contracting several projective measurements, concatenating the resulting quantum instruments, and constructing the ancilla for the resulting POVM. Thus the construction is physically spurious.
 
  • #28
vanhees71 said:
But probability in quantum physics does not have the same status as probability in classical physics [...] In quantum theory the probabilities enter the description at the fundamental level
Only according to some interpretations. The minimal interpretation that you advocate in your lecture notes is completely silent about how fundamental quantum probabilities are. It simply takes them as the starting point without arguing that it must be so. Nonminimal interpretations may treat probability as irreducible or as emergent.
vanhees71 said:
I haven't yet read your new paper in detail, but I don't think that the use of the more general description of measurements with POVMs changes much of the fundamental content of quantum theory, at least not if I'm allowed to use it with the physical meaning it has,
I think you should read my paper in sufficient detail before arguing against one of its main conclusions.
 
  • #29
Are you trying to "derive" the Hilbert-space structure of QM from some notion of POVMs? What then is the underlying axiomatics? I have no idea how to define POVMs without assuming the Hilbert-space structure to begin with. What's the goal of such an endeavor? Is it possible to make it digestible for physicists, or is it purely mathematical "l'art pour l'art"?
 
  • #30
vanhees71 said:
Are you trying to "derive" the Hilbert-space structure of QM from some notion of POVMs? What then is the underlying axiomatics?
I answer this in the thread for my paper.
 
  • #31
A. Neumaier said:
Moreover, starting from a physical Hilbert space, one does not recover the same Hilbert space by contracting several projective measurements, concatenating the resulting quantum instruments, and constructing the ancilla for the resulting POVM. Thus the construction is physically spurious.
I don't really understand what you are saying here. Can you elaborate a bit, or give a reference where this is explained in more detail?
 
  • #33
vanhees71 said:
... demonstrated by the clear observation of the violation of Bell's inequalities, which hold for local deterministic hidden-variable theories. Of course, one way out might be a nonlocal deterministic hidden-variable theory, but I haven't seen any convincing one yet, at least not one taking relativity into account.
For a non-local theory, taking relativity into account means accepting a preferred frame. For SR this is simply going back to the pre-Minkowski interpretation used by Lorentz and Poincaré. How to generalize this to gravity is also known; that's Schmelzer's generalized Lorentz ether.

Once a preferred frame is accepted as unproblematic, the remaining question is the choice of the configuration space. Here, a standard reference is

Bohm, D., Hiley, B.J., Kaloyerou, P.N. (1987). An ontological basis for the quantum theory. Phys. Reports 144(6), 321-375.

I find the field ontology for the bosons convincing. The standard relativistic field-theoretic Lagrangian can be used, so relativistic effects are not a problem at all.
 
  • #34
Sunil said:
For SR this is simply going back to the pre-Minkowski interpretation used by Lorentz and Poincaré.
I believe that @vanhees71 thinks that it's obviously wrong, despite the fact that it makes the same measurable predictions as the standard Einstein-Minkowski interpretation of SR.
 
  • #35
I have lost track of what you are referring to, but you said above that the Bohmian reinterpretation of QED is gauge dependent and thus acausal, i.e., intrinsically inconsistent. Violating a local gauge symmetry always makes the corresponding QFT intrinsically inconsistent, and of course that's a serious flaw. Pauli would have said "it's not even wrong".
 
