Are there signs that any Quantum Interpretation can be proved or disproved?

  • #251
As I said, I give up. I don't understand your definition of what's observable and how it's measured within your reinterpretation of quantum theory. I cannot understand the expression "standard deviation", if I'm not allowed to understand it in the usual statistical meaning. It's also clear that any physics is about reproducible observations and measurements. This is a prerequisite particularly in the standard statistical interpretation.
 
  • #252
vanhees71 said:
I cannot understand the expression "standard deviation", if I'm not allowed to understand it in the usual statistical meaning.
It has the usual statistical meaning, since, like all statistics, it is applied to measurement results.

I don't discard statistics; I just remove it from the foundations. Just as there are many classical random phenomena but statistics has no place in the foundations of classical mechanics, so there are many quantum random phenomena but statistics should have no place in the foundations of quantum mechanics.
 
  • #253
But then there is no physical meaning in the formalism anymore, and at least I fail to understand where you introduce this physical meaning again.

There is no statistics in the foundations of classical physics, because classical physics is formulated as a deterministic theory. In the standard interpretation, the resolution of the discrepancies between the classical description and observation regarding "quantum phenomena" is that one postulates generic randomness in the description of Nature. All attempts for more than 100 years to get back to a deterministic description have so far failed, and you certainly cannot achieve this by simply declaring that there is no statistical foundation, without any operational substitute for it. As you've just admitted, when it comes to phenomenology and thus the operational definition of observables in relation to your formalism you have to introduce the standard statistical meaning again. So I don't understand why one should not clearly state the statistical meaning of the formalism from the very beginning.
 
  • Like
Likes physicsworks
  • #254
vanhees71 said:
There is no statistics in the foundations of classical physics, because classical physics is formulated as a deterministic theory.
There is no statistics in the foundations of quantum physics, because quantum physics is formulated as a deterministic theory, according to the thermal interpretation.
vanhees71 said:
All attempts for more than 100 years to get back to a deterministic description have so far failed
But you discount my successful attempt without trying to understand it from that perspective. You only try to understand it from your statistical perspective.

vanhees71 said:
As you've just admitted, when it comes to phenomenology and thus the operational definition of observables in relation to your formalism you have to introduce the standard statistical meaning again.
I introduced it only in those cases (case 2.) where there are enough and random enough observations to apply standard statistical methods. In such cases you also need it in classical mechanics, and as in the classical case it is introduced a posteriori and not in the foundations.

Macroscopic quantities are the only things that are directly measured, hence the only things that need an operational meaning. For macroscopic quantities you don't need statistics to give the quantum expectations an operational meaning. Microscopic quantities are measured only indirectly (i.e., whatever we know about them is inferred from a large number of macroscopic observations), hence can be assigned an operational meaning in a second step, after having introduced statistics in the same way as in classical physics.

vanhees71 said:
So I don't understand why one should not clearly state the statistical meaning of the formalism from the very beginning.
Since you understand why one should not clearly state the statistical meaning of classical mechanics from the very beginning, you can proceed by analogy. The thermal interpretation works in exactly the same way.
 
  • #255
atyy said:
As Bohr said: "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we say about Nature."
Yeah. So physics must be seen as a part of linguistics. More precisely, of socio-linguistics.
 
  • #256
WernerQH said:
Yeah. So physics must be seen as a part of linguistics. More precisely, of socio-linguistics.
If it proves that the world isn't fundamentally deterministic and causal, physics could even be seen as part of psychiatry or psychology.
 
  • Like
Likes AlexCaledin
  • #257
These arguments will go in circles forever, obviously because something is missing from the picture. If I showed a cell phone to an aborigine, he would say that the voice of someone on another continent emanating from the phone is emergent, because he'd not be able to identify all the parts and processes going on in the phone. Much like we can't interpret reality based on the limited knowledge that is currently available. It could well be that there are processes, fields, etc. that we haven't detected yet which would allow us to interpret comprehensively how reality works.
Based on what we currently know, most of what we see around us would be "emergent" (matter, space, time, causality, determinism, even consciousness). But in the end, we could be the aborigine standing next to a cell phone unaware of the existence of EM waves.
 
  • Like
Likes er404
  • #258
EPR said:
But in the end, we could be the aborigine standing next to a cell phone unaware of the existence of EM waves.
... unaware of the existence of advanced EM waves.
 
  • #259
vanhees71 said:
The problem also is he is contradicting himself all the time. Just in his last posting all of a sudden he admitted that he needs probabilities to define his q-expectations. All the time he denied that this standard definition is abandoned in this thermal interpretation.
Well, I should say something to this, but what should I say? And I should not wait forever, otherwise it gets even more awkward.

A. Neumaier is not contradicting himself, but there is a strange paradox: Due to the way the thermal interpretation avoids adding unnecessary cruft and clarifies which concepts are possible encodings (wavefunctions, density matrices, ...) and which concepts are more fundamental than mere encodings (q-expectations and q-correlations, their space-time dependence, ...), it is inherently suitable to enable an instrumentalist like me to make simpler computations and better justifiable approximations. So it would be ideally suited for the “let me calculate and explain” approach I liked so much in the writings of Roland Omnès and Robert B. Griffiths about consistent histories. But instead, A. Neumaier sometimes skips even simple calculations (like describing the quantum state in the Stern-Gerlach experiment immediately before the detection interaction) that would seem helpful (from my perspective) to make his explanations easier to grasp. I admit that he gives helpful references to existing papers doing such calculations, but of course those papers won't use the thermal interpretation to simplify their calculations (or justify their approximations).

And in cases where he doesn't try to properly understand what you wrote and instead gives a response he has given in similar form a thousand times before, he indeed risks accidentally contradicting himself and contributing to the impression of going round in circles.

The arguments have gone in circles for years, and it seems to be impossible to communicate even the problem with removing probabilities from the interpretation.
Well, if your goal is to disprove the thermal interpretation, then I don't get what you expect A. Neumaier to do. I wouldn't even say that he tries to abandon probabilities in his interpretation. He just aims for an ignorance interpretation of probability (like in de Broglie-Bohm), and tries to avoid one circularity that could arise in a (virtual) frequentist interpretation of probability (with respect to uncertainties). Let me quote from section "4. A view from the past" from an article about earthquake prediction which I have reread (yesterday and) today:
We will suppose (as we may by lumping several primitive propositions together) that there is just one primitive proposition, the ‘probability axiom,’ and we will call it A for short. ...
Now an A cannot assert a certainty about a particular number n of throws, such as ‘the proportion of 6’s will certainly be within p ± ϵ for large enough n (the largeness depending on ϵ)’. It can only say ‘the proportion will lie between p ± ϵ with at least such and such probability (depending on ϵ and n0 ) whenever n > n0 ’. The vicious circle is apparent.
 
  • Like
Likes timmdeeg
  • #260
So far for me the new interpretation lacks precisely an interpretation of the math. There seems to be no physical, operational interpretation of the expectation values and correlations anymore, because the standard probability interpretation is explicitly negated and not substituted by anything new.

I don't want to disprove the thermal interpretation. I just want to understand it. For me it is not a physical theory as long as it is not clearly stated which meaning the formalism has in connection with observations and measurements in the lab.

What is for sure wrong is the assumption that what a measurement device measures is always a q-expectation value.
 
  • Like
Likes gentzen
  • #261
vanhees71 said:
What is for sure wrong is the assumption that what a measurement device measures is always a q-expectation value.
The theory talks about q-expectation values, so what needs to be compared with available (or future) observations are "things" that can be derived from those (space and time dependent) q-expectation values. Those "things" are functions of the q-expectation values, where "function" can include averaging over space and time, to take the limited resolution of measurement devices into account. It could also include nonlinear functions of many different (averaged) q-expectation values. So far, so good.

Where it might become objectionable is when A. Neumaier wants to compute statistics of the available observations before the comparison, in case those observations are mostly random like individual silver spots on the screen for a Stern-Gerlach experiment. His argument is that the individual spot is not reproducible, only the statistics of those spots is. And at this point you object and say that you no longer see a difference to the minimal statistical interpretation.

The argument that he would accept something reproducible like a temperature, an electric current, or other macroscopic variables without requiring statistics seems not convincing, because after all a silver spot is also a macroscopic observable, and repeated measurements of properties of an individual silver spot would probably be reproducible. But that doesn't count, because ...

I won't try to convince you. You already stated that you find that whole business confusing and unsatisfactory. Also, I should not try to talk for A. Neumaier, because that would only propagate my own misunderstandings. And if I talk for myself, detailed properties of the q-expectations interest me more than whether measuring silver spots is reproducible or not. What interests me for example is how much gauge-freedom is still left in the q-expectations, whether taking functions of the q-expectations is sufficient for removing all remaining gauge-freedom, how specific q-correlations can be observed, whether certain q-correlations are similar to evanescent modes in being not really directly observable, and stuff like that. And I am interested in interpretations of probabilities, and resolution of the corresponding paradoxes and circularity issues. And I am interested in randomness, because there is no such thing as perfect randomness (or objective randomness), at least that is my guess.
(Sorry for the long reply, and thanks for answering me. I should not overstretch your friendliness too much by going on and on and on in circles.)
 
  • #262
gentzen said:
Where it might become objectionable is when A. Neumaier wants to compute statistics of the available observations before the comparison, in case those observations are mostly random like individual silver spots on the screen for a Stern-Gerlach experiment. His argument is that the individual spot is not reproducible, only the statistics of those spots is. And at this point you object and say that you no longer see a difference to the minimal statistical interpretation.
If I were allowed to interpret the expectation values in the usual probabilistic way, there'd be no problem with that: you can calculate all moments, and these uniquely reproduce the probability distribution for finding a specific value when measuring the observable, which is all I can know about this measurement before doing it, given the state the system is prepared in.
gentzen said:
The argument that he would accept something reproducible like a temperature, an electric current, or other macroscopic variables without requiring statistics seems not convincing, because after all a silver spot is also a macroscopic observable, and repeated measurements of properties of an individual silver spot would probably be reproducible. But that doesn't count, because ...
But macroscopic observables also show quantum fluctuations in principle, which are however almost always far too small to be significant or even detectable within the accuracy of the observations. There are of course exceptions. E.g., besides their great success in measuring gravitational waves from astronomical sources, gravitational-wave detectors have reached a precision where quantum fluctuations of macroscopic objects (the quite heavy mirrors of the Michelson interferometer) can be observed.
gentzen said:
I won't try to convince you. You already stated that you find that whole business confusing and unsatisfactory. Also, I should not try to talk for A. Neumaier, because that would only propagate my own misunderstandings. And if I talk for myself, detailed properties of the q-expectations interest me more than whether measuring silver spots is reproducible or not. What interests me for example is how much gauge-freedom is still left in the q-expectations, whether taking functions of the q-expectations is sufficient for removing all remaining gauge-freedom, how specific q-correlations can be observed, whether certain q-correlations are similar to evanescent modes in being not really directly observable, and stuff like that. And I am interested in interpretations of probabilities, and resolution of the corresponding paradoxes and circularity issues. And I am interested in randomness, because there is no such thing as perfect randomness (or objective randomness), at least that is my guess.
(Sorry for the long reply, and thanks for answering me. I should not overstretch your friendliness too much by going on and on and on in circles.)
I don't know what you mean by "gauge freedom". I also don't see circularity issues with probabilities in the standard minimal interpretation of QT. It's just the basic assumption that the meaning of the quantum state, described by the statistical operator, is probabilistic and only probabilistic, as described by the postulates of QT and particularly Born's rule (or the corresponding extension in the POVM formalism). Generalizing this concept to POVMs, which seem to be very important in the thermal interpretation as defining irreducible postulates, is not a problem, but as long as I'm allowed to use the standard probabilistic interpretation of the state it is just an extension to the description of non-ideal von Neumann measurements.

Whether or not there is objective randomness is of course a very challenging question, even if you are allowed to use the clear standard interpretation of QT. According to what we know today, I'd say it's pretty certain that there is objective randomness in Nature, because of the violation of Bell's inequality and the confirmation of standard local QED in all these Bell tests with photons: the assumption of deterministic hidden variables responsible for the randomness of the observables is ruled out within our contemporary experience with all the successful relativistic descriptions of Nature, which are all local, and there is no satisfactory non-local reinterpretation of them in the way Bohmian mechanics reinterprets nonrelativistic quantum mechanics. Of course we don't have any hard "proof" (proof in the sense of natural science, not of mathematics) for that assumption, because maybe one day some clever physicist finds such a non-local deterministic description compatible with the causality structure of relativistic spacetime. From what we know today, however, there is not the slightest necessity for such a theory, because there is not a single observation hinting at something like this.
 
  • #263
vanhees71 said:
I don't know what you mean by "gauge freedom".
Well, I mean things like the global phase of a wavefunction, or the reference zero energy for a potential. (However, the real gauge freedom is actually the local phase of the wavefunction, so I am not sure what getting rid of the global phase will change.) Or the freedom of a vector potential compared to the electromagnetic fields themselves. If q-expectations are used instead of wavefunctions, then the global phase is no longer there. But maybe other similar degrees of freedom are still there.
vanhees71 said:
I also don't see circularity issues with probabilities in the standard minimal interpretation of QT.
The circularity is not related to QT or the standard minimal interpretation. It just means that if you try to define probability via frequencies, then your definition for what it means in practice for finitely many "measurements" might implicitly already use probabilities.

vanhees71 said:
but as long as I'm allowed to use the standard probabilistic interpretation of the state
The interpretation of the state is indeed different in the thermal interpretation, so it would no longer be the thermal interpretation if you use the standard probabilistic interpretation of the state.
It would be easiest for me to explain this by contrasting it to the corresponding interpretation of the state in QBism, and by explaining why I prefer the thermal interpretation of the state.

vanhees71 said:
I'd say it's pretty certain that there is objective randomness in Nature, because
I agree that there is randomness in nature. But it doesn't need to be perfect, it just needs to be good enough to prevent exploiting the non-local randomness observed in Bell-type experiments for faster-than-light signaling / communication. So by objective randomness, I mean a mathematically perfect randomness, and when I say that I believe that there is no such thing as perfect randomness (or objective randomness), I mean that it is not necessary to postulate its existence for making sense of QT and Bell-type experiments.
 
  • #264
gentzen said:
Well, I mean things like the global phase of a wavefunction, or the reference zero energy for a potential. (However, the real gauge freedom is actually the local phase of the wavefunction, so I am not sure what getting rid of the global phase will change.) Or the freedom of a vector potential compared to the electromagnetic fields themselves. If q-expectations are used instead of wavefunctions, then the global phase is no longer there. But maybe other similar degrees of freedom are still there.
These quibbles are resolved by defining statistical operators as the representatives of states. Then the "phase gauge freedom" goes away in the sense that you use "gauge invariant" descriptions. This has nothing to do with interpretation but is a well-understood part of the formalism, and it's very important to remember that not "wave functions" represent (pure) states but the corresponding statistical operator or, equivalently, unit rays in Hilbert space. Without this you couldn't do non-relativistic QM by the way, because only central extensions of the Galileo group lead to physically meaningful dynamics (Wigner, Inönü).
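Just as a tiny numerical illustration of this point (with a hypothetical two-component state vector, nothing more): a global phase changes the state vector, but neither the statistical operator nor any expectation value computed from it.

```python
import numpy as np

# Hypothetical normalized two-component state vector
psi = np.array([0.6, 0.8j])
phase = np.exp(1j * 1.234)          # arbitrary global phase

rho1 = np.outer(psi, psi.conj())                     # |psi><psi|
rho2 = np.outer(phase * psi, (phase * psi).conj())   # same state, phase-shifted vector

O = np.array([[0.0, 1.0], [1.0, 0.0]])               # some observable

print(np.allclose(rho1, rho2))                            # True: same statistical operator
print(np.trace(rho1 @ O).real, np.trace(rho2 @ O).real)   # identical expectation values
```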
gentzen said:
The circularity is not related to QT or the standard minimal interpretation. It just means that if you try to define probability via frequencies, then your definition for what it means in practice for finitely many "measurements" might implicitly already use probabilities.
Exactly. That's why I don't understand why, in the thermal interpretation, I have to abandon the probabilistic interpretation without any substitute for it to connect the formalism to the "lab".
gentzen said:
The interpretation of the state is indeed different in the thermal interpretation, so it would no longer be the thermal interpretation if you use the standard probabilistic interpretation of the state.
It would be easiest for me to explain this by contrasting it to the corresponding interpretation of the state in QBism, and by explaining why I prefer the thermal interpretation of the state.
The problem is that nobody explained to me how to relate the q-expectation values to experiment. I don't care whether you interpret probabilities in the frequentist or Bayesian way. The QBists couldn't explain to me how their interpretation relates to real-world observations either.
gentzen said:
I agree that there is randomness in nature. But it doesn't need to be perfect, it just needs to be good enough to prevent exploiting the non-local randomness observed in Bell-type experiments for faster-than-light signaling / communication. So by objective randomness, I mean a mathematically perfect randomness, and when I say that I believe that there is no such thing as perfect randomness (or objective randomness), I mean that it is not necessary to postulate its existence for making sense of QT and Bell-type experiments.
I think the only way to realize what you call "perfect" or "objective" randomness is provided by QT measurements, since to the best of our knowledge these are objectively random events. The perfect unpolarized single-photon source is to produce the Bell-singlet state by parametric downconversion. Then the single-photon polarizations are with certainty maximally uncertain, and the single photons are perfectly unpolarized.
 
  • #265
vanhees71 said:
These quibbles are resolved by defining statistical operators as the representatives of states. Then the "phase gauge freedom" goes away in the sense that you use "gauge invariant" descriptions. This has nothing to do with interpretation but is a well-understood part of the formalism,
My "quibbles" started with the "question" whether using statistical operators instead of wave functions will make all "gauge freedom" go away. Your "answer" that the "phase gauge freedom" goes away is misleading, because for example the reference zero energy for a potential doesn't go away and remains important.
It may be "a well-understood part of the formalism" for you, and I agree that it obviously should not hold any deep secrets. But I still don't fully understand it, not even in the simpler optics context. Countless times I computed time averaged Poynting vectors, when there were differing opinions on whether some normalization or some computation result or effect were "correct" or a "bug". The actual normalizations or effects were often much simpler than those Poynting vector computations, but I don't know whether (or how) I could have avoided them.
Something responsible for this type of confusion and hard to resolve debates does have connections to interpretation in my book.

vanhees71 said:
and it's very important to remember that not "wave functions" represent (pure) states but the corresponding statistical operator or, equivalently, unit rays in Hilbert space. Without this you couldn't do non-relativistic QM by the way, because only central extensions of the Galileo group lead to physically meaningful dynamics (Wigner, Inönü).
I have now read On the Contraction of Groups and Their Representations by E. Inonu and E. P. Wigner (1953). It reminded me of something I had read previously, namely Missed opportunities by Freeman J. Dyson (1972). So if I understood it correctly, some representations of the Galilei group arise as contractions of representations of the Lorentz group, and only those representations lead to physically meaningful dynamics. And the structure of the contracted part of the representation is that of a central extension.
 
  • Like
Likes vanhees71 and dextercioby
  • #266
gentzen said:
if I understood it correctly, some representations of the Galilei group arise as contractions of representations of the Lorentz group, and only those representations lead to physically meaningful dynamics. And the structure of the contracted part of the representation is that of a central extension.
... of projective representations of the inhomogeneous Lorentz group = Poincaré group
 
  • Like
Likes vanhees71
  • #267
gentzen said:
My "quibbles" started with the "question" whether using statistical operators instead of wave functions will make all "gauge freedom" go away. Your "answer" that the "phase gauge freedom" goes away is misleading, because for example the reference zero energy for a potential doesn't go away and remains important.
This I don't understand. Where is the absolute reference of the energy observable in your opinion? The physics doesn't change by using a Hamiltonian
$$\hat{H}'=\hat{H}+E_0 \hat{1}$$
instead of ##\hat{H}##. In the Schrödinger picture the time-evolution operators are
$$\hat{U}(t)=\exp(-\mathrm{i} \hat{H} t)$$
and
$$\hat{U}'(t)=\exp(-\mathrm{i} \hat{H}' t) = \exp(-\mathrm{i} E_0 t) \exp(-\mathrm{i} \hat{H} t).$$
The time evolution of the state is
$$\hat{\rho}(t)=\hat{U}(t) \hat{\rho}(0) \hat{U}^{\dagger}(t)=\hat{U}'(t) \hat{\rho}(0) \hat{U}^{\prime \dagger}(t).$$
So there's no change in the dynamics of the state (and for a pure state, whose state ket only picks up the irrelevant phase ##\exp(-\mathrm{i} E_0 t)##, the state ##|\psi(t) \rangle \langle \psi(t)|## is again independent of ##E_0##).
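A minimal numerical check of this (with a hypothetical two-level Hamiltonian and ##\hbar=1##): the shift by ##E_0## changes the time-evolution operator only by a global phase, and ##\hat{\rho}(t)## is unchanged.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-level Hamiltonian (hbar = 1) and an arbitrary energy offset E0
H = np.array([[0.0, 0.3], [0.3, 1.0]])
E0 = 2.5
Hp = H + E0 * np.eye(2)                      # H' = H + E0 * 1

psi0 = np.array([1.0, 1.0]) / np.sqrt(2)     # initial pure state
rho0 = np.outer(psi0, psi0.conj())           # rho(0) = |psi><psi|

t = 1.7
U = expm(-1j * H * t)                        # U(t)  = exp(-i H t)
Up = expm(-1j * Hp * t)                      # U'(t) = exp(-i H' t)

rho_t = U @ rho0 @ U.conj().T
rho_t_prime = Up @ rho0 @ Up.conj().T

print(np.allclose(Up, np.exp(-1j * E0 * t) * U))   # True: only a global phase
print(np.allclose(rho_t, rho_t_prime))             # True: same dynamics of the state
```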
 
  • #268
vanhees71 said:
This I don't understand. Where is the absolute reference of the energy observable in your opinion? The physics doesn't change by using an Hamiltonian ...
Of course the physics doesn't change; that is exactly why it is called "gauge freedom". But if I simulate electron-matter interaction in the context of scanning electron microscopy, then we talk about the energy of the electrons. And the zero-energy reference changes: it is different in vacuum from the reference inside a material. Even worse, for interactions with inner-shell electrons, it is different again during the interaction. The trouble is that the details of which is the "correct" reference energy can be tricky. On the one hand, it is just a convention. On the other hand, it is often important to determine the correct kinetic energy of the electrons for the concrete interactions. And then you get into "dangerous" discussions where you both risk being the one who is wrong, but also risk being the one who would have been correct, but failed to convince the others.

But when I say that I don't fully understand it, I mean something simpler. Concrete computations are done with a concrete zero reference energy and other concrete gauge fixings. And of course the potential is reported directly, and it doesn't seem to cause any problems. The context is enough to clarify its meaning, even its absolute value. But will this also be the case for other gauge-dependent quantities, or is the potential an exception?
 
  • #269
Do you have a concrete example, where the choice of the absolute reference point of energy leads to problems? I still don't understand what you mean.

Of course, what's observable are only gauge-invariant properties, and you have to carefully define them and ensure that you don't violate gauge invariance (in the stricter sense of choosing a gauge for gauge fields as, e.g., the electromagnetic field in atomic physics).

The em. potentials are not observables. They already lack the very basic microcausality property. What's observable are gauge-independent quantities like the energy-momentum tensor of the em. field, fulfilling the microcausality principle.
 
  • #270
vanhees71 said:
Do you have a concrete example, where the choice of the absolute reference point of energy leads to problems? I still don't understand what you mean.
Let me be clear that the sense in which "the absolute reference point of energy leads to problems" is of the type "And then you get into "dangerous" discussions where you both risk being the one who is wrong, but also risk being the one who would have been correct, but failed to convince the others."

I am not sure how helpful my concrete examples will be for helping you understand what I mean. Some concrete examples were "ionization energies for inner shells", "reference energy during interactions with inner-shell electrons", and "quantum surface transmission". But if I tried to explain them, first of all that would take quite some background, and after that I might risk just having those same "dangerous" discussions again, this time with you.

Anyway, let me try to explain the issue with the "ionization energies for inner shells". The ionization energies for outer shells of the material model are measured or calibrated, and that also includes the work function. But the ionization energies (and ionization cross sections) for inner shells are taken from precomputed databases for free atoms. If you assumed that their zero reference energy was the vacuum level, then the work function would become surprisingly important. However, surface contamination can easily change the work function completely. Additionally, the potential of the free atoms "in the long range" is shielded inside a material by the other electrons, further questioning whether taking the vacuum level as reference is a good idea.

vanhees71 said:
Of course, what's observable are only gauge-invariant properties
Maybe, but I don't see why the q-observables by themselves will necessarily be gauge-invariant (or that using statistical operators will help me in this respect). I mean, even the Hamiltonian you wrote down to demonstrate to me that the physics doesn't change is a q-observable. It is the energy, but of course the actual value of the energy depends on the zero reference.
 
  • #271
I'm not familiar with measurements of the ionization energies. Of course, you have to define the choice of the "zero point" of the energies you measure, since what you measure are always energy differences.

What is a "q-observable"?

It's of great importance to understand that the Hamiltonian in general is not gauge invariant and not an observable. That's so already in classical physics when using the Hamilton formulation of the action principle for a particle in an external electromagnetic field. The Hamiltonian contains the electromagnetic potentials and thus is not gauge invariant. For a nice explanation of the issue in the quantum context, see

Donald H. Kobe and Arthur L. Smirl, Gauge invariant formulation of the interaction of electromagnetic radiation and matter, Am. Jour. Phys. 46, 624 (1978)
https://doi.org/10.1119/1.11264
 
  • Like
Likes gentzen
  • #272
vanhees71 said:
It's of great importance to understand that the Hamiltonian in general is not gauge invariant
Very good, so my impression is that you understood my problem, and I understood your position. Whether or not I used words in a way that seems inappropriate to you is not important ("the Hamiltonian in general is ... not an observable"), because my focus is often less on individual words and more on the concrete stuff.

vanhees71 said:
What is a "q-observable"?
This is defined at the end of subsection 2.2 Properties in Foundations of quantum physics II. The thermal interpretation as
A subsystem of a system is specified by a choice declaring some of the quantities (q-observables) of the system to be the distinguished quantities of the subsystem. This includes a choice for the Hamiltonian of the subsystem.
If you are not familiar with that paper, looking at equations (6), (7), and (8) in section "2.1 The Ehrenfest picture of quantum mechanics" could be helpful for understanding in which sense I feel that q-expectations share many of the good properties of statistical operators. I once wrote:
The formulation of QM in section "2.1 The Ehrenfest picture of quantum mechanics" via (6), (7), and (8) shows another interesting advantage of using the collection of q-expectation as state instead of the density operator. That presentation unifies the Schrödinger, Heisenberg, and Dirac picture, but the density operator itself is different in each picture. That presentation even unifies classical and quantum mechanics.

However, that unification may be treacherous. It doesn't just include classical mechanics, but also classical mechanics with epistemic uncertainty about the state of the system. But in classical mechanics, there is a clear way to distinguish a state with epistemic uncertainty from a state without. In quantum mechanics, people tried resorting to pure states to achieve this distinction. But the thermal interpretation explicitly denies pure states this privilege, and explains well why it is important to deny pure states any special status.

vanhees71 said:
For a nice explanation of the issue in the quantum context, see

Donald H. Kobe and Arthur L. Smirl, Gauge invariant formulation ...
Thanks for the reference. I will have a look. Maybe it will indeed improve my understanding of "gauge freedom" and its impact on results from concrete computations.
 
  • #273
I don't understand A. Neumaier's interpretation, because it takes away the only interpretation which makes contact with real-world experiments (the probabilistic interpretation of the state, described by the statistical operator) and doesn't provide a new reinterpretation of the Born rule; it just calls the usual expectation value, ##\langle O \rangle =\mathrm{Tr}(\hat{\rho} \hat{O})## (expectation value in the usual meaning defined by probability theory), the "q-expectation value". If there is no probabilistic interpretation allowed, it's not clear how to relate this mathematical formal object to real-world objects dealt with in experiments.

All this has nothing to do with the concrete question of gauge invariance. I think the cited paper by D. H. Kobe (and the many great references therein, particularly the ones by Yang) is bang on your problem.

It's a great exercise to think about the motion of a charged particle in a homogeneous magnetic field leading to the famous Landau levels and formulate it in a gauge invariant way (the energy eigenvalue problem can be completely solved for the gauge-invariant observable probabilities).

Another very good source is also the textbook by Cohen-Tannoudji, Quantum Mechanics, Vol. I, Complement H III.
 
  • #274
vanhees71 said:
So what's literally measured is the position of the Ag atoms when hitting this screen.
vanhees71 said:
If there is no probabilistic interpretation allowed, it's not clear how to relate this mathematical formal object to real-world objects dealt with in experiments.
The thermal interpretation gives a deterministic interpretation for q-expectations of macroscopic quantities, which takes account of all measured currents, pointer readings, dots on a screen, counters, etc. This covers everything that is literally measured, in the sense of the first quote. This relates the quantum formalism to a large class of real-world objects dealt with in experiments.

In addition, there is a derivation of the probabilistic interpretation for q-expectations of microscopic quantities (namely as statistical expectation values), so this does not have to be postulated (but neither is it forbidden).

Thus everything of experimental interest is covered. I don't understand why you object.
 
  • #275
How then do you explain the observed fact that the position at which the Ag atom hits the screen is random (provided the initial Ag atoms are not prepared in eigenstates of the spin component under investigation)? The single atom doesn't land around the expectation value (in the usual probabilistic sense, because I don't yet understand the instrumental meaning of your q-expectation values) with some uncertainty (again in the usual probabilistic sense of a standard deviation) but around two spots. The demonstration of this "direction quantization" was the great achievement of the experiment!

I don't object, I only need an instrumental understanding. If you now say you can derive the usual probabilistic interpretation and accept it as the instrumental understanding of the formalism, I don't know why you negate the validity of the standard probabilistic view all the time. Understood in this way, your thermal interpretation is just using another set of postulates to get back the same quantum theory we have had since 1926. If this is now an understanding acceptable to you, the only point I have to understand then is why you insist on the collapse of the state as something outside the formalism but necessary for its interpretation.
 
  • #276
vanhees71 said:
How then do you explain the observed fact that the position at which the Ag atom hits the screen is random (provided the initial Ag atoms are not prepared in eigenstates of the spin component under investigation)? The single atom doesn't land around the expectation value (in the usual probabilistic sense
This quantum fact is explained in the same way as the observed classical fact that the values observed when casting a die are integers although the expectation value is not. But the expectation values of the powers (the moments) allow one to reconstruct the complete probability distribution, and this reveals that the individual values cast are just 1,...,6.
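A minimal numerical sketch of that reconstruction (assuming a fair die as input; the moments ##\langle X^k\rangle## for ##k=0,\dots,5## determine the six probabilities via a Vandermonde system):

```python
import numpy as np

values = np.arange(1, 7)          # possible outcomes of the die
p_true = np.full(6, 1 / 6)        # fair-die probabilities (assumed as input)

# Moments E[X^k] for k = 0, ..., 5
moments = np.array([np.sum(p_true * values**k) for k in range(6)])

# Vandermonde system: V[k, j] = values[j]**k, so V @ p = moments
V = np.vander(values, 6, increasing=True).T
p_recovered = np.linalg.solve(V, moments)

print(np.round(p_recovered, 6))   # 1/6 for each of the values 1,...,6
```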

vanhees71 said:
If you now say you can derive the usual probabilistic interpretation and accept it as the instrumental understanding of the formalism, I don't know why you negate the validity of the standard probabilistic view all the time.

I was never negating the validity of the standard probabilistic view. I just removed it from the foundations! I accept the usual probabilistic interpretation not as the instrumental understanding of the formalism in general but only as the instrumental understanding in cases where one actually measures a whole probability distribution!
vanhees71 said:
If this is now an understanding acceptable to you,
Not yet, because you want to have it as a foundation, whereas I want to have it as a consequence of more natural (and more complete) foundations.
vanhees71 said:
the only point I have to understand then is why you insist on the collapse of the state as something outside the formalism but necessary for its interpretation.
The collapse is not an assumption in the thermal interpretation; it is just a frequently (but, as you correctly remark, not always) observed fact. I insist on its presence only because the collapse cannot be derived from your minimal statistical interpretation but is needed in practice, and hence shows that the minimal statistical interpretation is incomplete, hence too minimal.
 
  • #277
I don't care what you take as the postulates, as long as you end up with a connection between the formalism and the observations that successfully describes the observations. Of course, if you have all moments of a distribution, you have the distribution. The important point of the instrumental meaning is just that it's a probability distribution. So it seems to be settled that I can read your q-expectation values simply in terms of the standard interpretation of probabilities, defined in the usual way as QT has done since 1926.

Of course the collapse cannot be derived, because it's by assumption outside the formal description. It has no foundation whatsoever. In the case of relativistic QFT it even contradicts its very foundations, which rest on locality/microcausality.

I don't know why you insist on its necessity, because I don't see where you need it. I also don't see why the minimal statistical interpretation should be incomplete and your interpretation complete. I thought that in the end it's simply equivalent (as soon as I'm allowed to give your mathematical operations, particularly the Born rule for calculating your q-expectation values, the standard probabilistic meaning as expectation values).
 
  • #278
vanhees71 said:
The important point of the instrumental meaning is just that it's a probability distribution.
Only when you have many copies of the same microscopic system.

But for a single ion in an ion trap, probabilities are instrumentally meaningless since to measure probabilities you need many copies!
 
  • #279
vanhees71 said:
I also don't see why the minimal statistical interpretation should be incomplete and your interpretation complete.
vanhees71 said:
Of course the collapse cannot be derived, because it's by assumption outside the formal description.
Well, if the minimal statistical interpretation were complete, the formalism together with this interpretation should allow the derivation of the collapse or, in greater generality, should allow one to predict from the microscopic description of a filter how individual input states are transformed into individual output states. This is the measurement problem.
 
  • #280
Again: the achievement of the 2012 Nobelists is that they can use one atom/photon. That doesn't mean that there is not the usual meaning of probability concerning the observables on this one atom/photon. They just repeat the same measurement with the one individual atom/photon. I can use one and the same die and throw it repeatedly to get the probability for the outcomes. I don't need to use different dice (indeed, for macroscopic objects these are strictly speaking two different random experiments, because the dice are never exactly identical, while sufficiently simple "quantum objects" are).

Since the predictions of QT are probabilistic you have to do that to be able to gain "enough statistics" to compare your probabilistic predictions with the statistics of the measurement outcomes.
 
  • #281
vanhees71 said:
That doesn't mean that there is not the usual meaning of probability concerning the observables on this one atom/photon.
They interpret the results for one single atom. The only statistics they make is about the time series produced by this atom, and they draw conclusions for this atom.
vanhees71 said:
I can use one and the same die and throw it repeatedly to get the probability for the outcomes.
But only if you cast the die in a random way, so that the eyes are independently distributed. But this is not the case in a continuous quantum measurement. The latter means that you measure the die many times while it falls, and stop the experiment after the die is at rest. Or that after you cast the first die you lift it carefully and put it down again to get the next value for the eyes. In both cases the probability for the outcome becomes meaningless.
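As a small illustration of why the independence matters (with a hypothetical "sticky" die that keeps its face with probability 0.95 when lifted carefully, versus independent casts):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Independent casts of a fair die
iid = rng.integers(1, 7, size=n)

# Hypothetical "careful lift and put down": the die keeps its face with
# probability 0.95, otherwise it is re-randomized
sticky = [int(rng.integers(1, 7))]
for _ in range(n - 1):
    sticky.append(sticky[-1] if rng.random() < 0.95 else int(rng.integers(1, 7)))
sticky = np.array(sticky)

for name, x in [("independent", iid), ("sticky", sticky)]:
    freq = np.bincount(x, minlength=7)[1:] / n
    print(name, np.round(freq, 2))
# The independent frequencies are already close to 1/6 each; the correlated
# series is dominated by long runs, so its frequencies say little about the
# probability of a single fresh cast.
```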
vanhees71 said:
Since the predictions of QT are probabilistic you have to do that to be able to gain "enough statistics" to compare your probabilistic predictions with the statistics of the measurement outcomes.
Some predictions are probabilistic, some are not. If you have a single atom in a trap then the raw observations are noisy but not independent realizations of the same quantity; thus the minimal interpretation does not say anything meaningful. Also, the observations depend on the controls applied to the atom, just as when manipulating a die by hand during the measurements.

The observations are therefore not given by Born's rule but by the rules for an externally controlled quantum stochastic process. To be able to do this in a correctly predicted way was worth a Nobel prize.
 
  • #282
Sure, but they use the "time series" (as you call it) to gain statistics. I don't see how this contradicts the very general foundations of QT as formulated by our standard postulates, only because I use a single atom to perform the same measurement repeatedly. Since atoms are exactly identical, it doesn't make any difference whether I repeat the same experiment on the one individual atom or on several different identical atoms (all this of course stated within standard QT ;-)).
 
  • #283
vanhees71 said:
Sure, but they use the "time series" (as you call it) to gain statistics. I don't see how this contradicts the very general foundations of QT as formulated by our standard postulates, only because I use a single atom to perform the same measurement repeatedly.
Because for Born's rule to be instrumentally meaningful you need identically prepared systems. Even for classical statistics, you need many instances to get meaningful frequentist (hence scientifically well-posed) probabilities.
vanhees71 said:
Since atoms are exactly identical, it doesn't make any difference whether I repeat the same experiment on the one individual atom or on several different identical atoms.
To prepare a quantum system you always need to make it distinguishable from other, identical systems that are not prepared. Indeed, the atom in an ion trap is not indistinguishable, but is distinguished by the property of being in the trap, which has room for only one atom.

The point is that at each time measured, the ion is in a (at least potentially) different state, so a notion of 'identically prepared' cannot be applied. Though all atoms with the same number of protons, neutrons, and electrons are identical in the sense of Bose statistics, the atoms are not identically prepared! The one you single out in an ion trap is very differently prepared from one in an ideal gas.
 
  • Like
Likes gentzen, mattt, dextercioby and 1 other person
  • #284
But in this experiment there are many identically prepared systems, realized with one and the same ion in a trap. I don't see why an ensemble shouldn't be realized with one and the same system. That has nothing to do with quantum mechanics. You have the same in classical statistics: if you take a gas in a container, it also consists of the same molecules all the time, and the thermodynamical quantities (temperature, density, pressure, ...) are understood as averages over many collisions of these same molecules.

I also don't understand why you say the ion is not prepared. To the contrary, it's pretty sharply prepared by being trapped in the trap. The laser exciting it is of course also part of the preparation procedure. Of course the single ion in the trap is not prepared in a thermal state here; I never claimed this.
 
  • #285
vanhees71 said:
That has nothing to do with quantum mechanics.
Exactly. And the irony is that A. Neumaier never denied this. One of his points is that the thermal interpretation solves this issue both for classical thermodynamics and for quantum mechanics. And his attempts to bring this point across to you are what helped me to finally get it. As I wrote: "Your current discussions with A. Neumaier helped me to see more clearly how the thermal interpretation resolves paradoxes I had thought about long before I heard about the thermal interpretation."
 
  • #286
Maybe then you can explain to me what the physical content of this interpretation is. I still don't get it from Neumaier's explanations, which are partially self-contradictory: At one time he abandons the standard probabilistic meaning of his "q-expectation values", and then I'm lost, because then there's no physical meaning of the formalism left. Then he tells me again that it's still the same probabilistic meaning as in standard minimally interpreted QT, but then I don't see where the difference is between his and the standard QT interpretation.

Then there is the issue of doing experiments with a single ion in a trap. Neumaier seems to believe these cannot be described within the standard minimal interpretation, but that's not right, because many people in this community of physicists work well with standard QT, and indeed what's measured here are probabilities or expectation values over many realizations of the experiment. That you use one and the same ion to realize these ensembles is no issue at all.
 
  • Like
Likes WernerQH
  • #287
vanhees71 said:
Then he tells me again that it's still the same probabilistic meaning as in standard minimally interpreted QT, but then I don't see where the difference is between his and the standard QT interpretation.
For the cases where the standard minimally interpreted QT applies, there is no significant difference between his and the standard QT interpretation.

vanhees71 said:
At one time he abandons the standard probabilistic meaning of his "q-expectation values"
The name "q-expectation value" is reserved for the value computed by the model from the specific formula. The interpretation of those values is done separately. One reason for this is that not all values that the model can compute by such formulas will have a direct operational meaning in the real world.

vanhees71 said:
Maybe then you can explain to me what the physical content of this interpretation is.
Well, I wrote an explanation, but have now copied it away for the moment. I am not sure whether A. Neumaier would be happy if I tried, because he has written nicely polished articles and a nicely polished book where he explains it. Any explanation in a few words is bound to misrepresent his views, and additionally I should only speak for myself.

Let me instead remark how I see its relation to QBism: there you have talk about "agents", but what is an agent, and why should you care? In the thermal interpretation, there are no agents, but models are taken seriously on their own. In QBism, the agent uses QM as a cookbook to update his state. A model, on the other hand, naturally has a state and doesn't need a cookbook to update it; the consistent evaluation of the state at arbitrary places in space and time is exactly what a model is all about.

But how do engineers and scientists use models to make predictions about the real world? Good question! Try to closely watch what they actually do, and try not to be misled by their words about what they believe they are doing!
 
  • #288
Ok, then I have to give up. I also don't understand QBism as a physical interpretation of QT. I also don't see where standard minimally interpreted QT should not apply (except for the unsolved problem of "quantum gravity", but that's not under debate here). If I watch what experimentalists do when they use QT, it's always within the standard probabilistic meaning of the quantum state.
 
  • #289
Isn't it odd that a theory that is almost 100 years old triggers such debates between two people who know it extremely well? It seems to disprove the idea that there is "no problem at all". The meaning of probability is being discussed to this day. In my opinion probability theory, just like geometry, is an indispensable ingredient of modern physical theories.

Is it necessary to emphasize that an ensemble has properties different from those of its members? An ensemble (average) can evolve smoothly and deterministically, but this need not be true for its members.

The purpose of an ensemble is to permit the statistical description of its members. And it is here where (I think) the deficiency of the statistical interpretation lies: It is too vague on what quantum theory is about, which properties the members of the ensembles have or do not have. It is not adequate to talk about quantum "objects" with conflicting properties, or properties that do not exist at all times.

An ensemble need not be physical. It doesn't need to have as many members as there are molecules in a volume of gas. As Willard Gibbs has shown, it is sufficient for our calculations that we can imagine it.
 
  • Like
Likes gentzen and vanhees71
  • #290
WernerQH said:
The purpose of an ensemble is to permit the statistical description of its members. And it is here where (I think) the deficiency of the statistical interpretation lies: It is too vague on what quantum theory is about, which properties the members of the ensembles have or do not have. It is not adequate to talk about quantum "objects" with conflicting properties, or properties that do not exist at all times.
The problem is all this philosophical ballast put on QT by Bohr et al. Too much philosophy hides the physics. According to quantum theory the properties of an are described by the quantum state, represented by the statistical operator. There is nothing conflicting here. It uniquely tells you the probabilities to find one of the possible values for any observable when you measure them. Observables take determined values if and only if the system is prepared in a corresponding state, for which with 100% probability these observables take one of their possible values. The formalism also implies that generally it is impossible to prepare the system in a state where all observables take determined values.
 
  • #291
vanhees71 said:
According to quantum theory the properties of an are described by the quantum state, represented by the statistical operator. There is nothing conflicting here.
The formalism is perfect. But I do wonder what properties you were referring to ("properties of an ..."?). Saying that quantum theory is about observables sounds empty to me. Almost like "Classical Mechanics is about differential equations."
 
  • #292
Classical mechanics is about observables too, of course. As the name suggests, the state describes the system's properties unambiguously in both classical and quantum mechanics. Only the physical meaning of the (pure) states differs drastically.

In classical mechanics a pure state is a point in phase space. Specifying the point in phase space exactly at time ##t_0## implies that you know the exact point in phase space at any later time, and this implies that you know the precise values of all possible observables at any time ##t>t_0## (assuming you can exactly solve the equations of motion).

In quantum mechanics a pure state is represented by a statistical operator ##\hat{\rho}## that is a projection operator, ##\hat{\rho}^2=\hat{\rho}##, which means that there's a normalized state vector ##|\psi \rangle## such that ##\hat{\rho}=|\psi \rangle \langle \psi|##. You can consider it determined by a filter measurement of a complete set of compatible observables. This is the most complete preparation possible for the quantum system according to standard quantum theory, but all it implies concerning any observable is the probability for the outcome of a precise measurement, given by Born's rule: if you measure an observable ##O##, the probability to find one of the possible values ##o## (where ##o## is in the spectrum of the self-adjoint operator ##\hat{O}## representing ##O## and ##|o,\alpha \rangle## is a complete orthonormal set of the eigenspace ##\mathrm{Eig}(\hat{O},o)##) is
$$P(o)=\sum_{\alpha} \langle o,\alpha|\hat{\rho}|o,\alpha \rangle.$$
This is the only meaning of the formalism: It predicts probabilities for the outcome of measurements of any observable of the system given the preparation of the system, even if the preparation is as complete as it can be according to QT.
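A minimal numerical sketch of this formula (assuming a hypothetical qubit state and the observable ##S_z##, with ##\hbar=1##):

```python
import numpy as np

# Hypothetical pure qubit state and its statistical operator
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                  # rho = |psi><psi|

Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])   # observable S_z (hbar = 1)
evals, evecs = np.linalg.eigh(Sz)

# Born rule: P(o) = sum_alpha <o,alpha| rho |o,alpha>; here the spectrum is
# non-degenerate, so the alpha-sum has a single term per eigenvalue o
for o, v in zip(evals, evecs.T):
    P = np.real(v.conj() @ rho @ v)
    print(f"P({o:+.1f}) = {P:.3f}")              # 0.5 each for this state

# Consistency check: <Sz> = Tr(rho Sz) = sum_o o P(o)
print(np.real(np.trace(rho @ Sz)))               # 0.0
```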

Again the dynamical state evolution is deterministic, i.e., given ##\hat{\rho}(t_0)## the state is defined at any later time ##t## by solving the von Neumann equation (or for pure states the Schrödinger equation with the given Hamiltonian) (that's describing the Schrödinger picture to keep the formulation simple; the same holds of course in any other picture of time evolution, but there the eigenstates evolve with time too; the physical results, i.e., the probabilities are independent of the choice of the picture).
 
  • #293
Thanks, I'm familiar with all this.
vanhees71 said:
According to quantum theory the properties of an [?] are described by the quantum state, represented by the statistical operator.
You probably meant to write properties of a system. John Bell argued that a word like "system" (just like "apparatus" or "measurement") should have no place in a rigorous formulation of quantum theory ("Against Measurement").
 
  • #294
vanhees71 said:
The problem is all this philosophical ballast put on QT by Bohr et al. Too much philosophy hides the physics.
It seems that others didn't think like that. For example, John Bell in „BERTLMANN’S SOCKS AND THE NATURE OF REALITY“ (Journal de Physique Colloques, 1981, 42 (C2), pp.C2-41-C2-62):

Fourthly and finally, it may be that Bohr's intuition was right - in that there is no reality below some 'classical' 'macroscopic' level. Then fundamental physical theory would remain fundamentally vague, until concepts like 'macroscopic' could be made sharper than they are today.
 
  • #295
Fra said:
By a similar argument one could argue that the detailed Hamiltonian for such a system + macroscopic detector is in principle not inferable by an observer?
physicsworks said:
Why? An observer records particular values of collective coordinates associated with the macroscopic detector. As long as this detector, or any other macroscopic object like a piece of paper on which we wrote these values, continues to exist in a sense that it doesn't explode into elementary particles, we can, with fantastic accuracy, use Bayes' rule of conditioning (on those particular values recorded) to predict probabilities of future observations. If those macroscopic objects which recorded our observations by means of collective coordinates cease to exist in the above mentioned sense, then we must go back and use the previous probability distribution before such conditioning was done.
Because a real observer does not always have enough capacity for information processing to resolve and infer the detailed unitary evolution before the system changes or the observer is forced to interact. It is possible only in the case where the quantum system is a small subsystem and the observer is dominant (and classical). This is why the laws of physics appear timeless only for small subsystems and small timescales. Time evolution cannot generally be inferred to be exactly unitary with certainty, even in principle. In a textbook example, the Hamiltonian of a black box may be given, but for a real observer even the Hamiltonian needs to be inferred, not just the initial state. So the observer's "information about laws" and its states (the laws presumably evolve) should somehow be treated by a more equal standard.

/Fredrik
 
  • #296
WernerQH said:
Thanks, I'm familiar with all this.

You probably meant to write properties of a system. John Bell argued that a word like "system" (just like "apparatus" or "measurement") should have no place in a rigorous formulation of quantum theory ("Against Measurement").
Ok, then find an alternative word. I don't know why it is forbidden to use standard language for well-defined things. A system is something we observe of course.
 
  • #297
WernerQH said:
The formalism is perfect. But I do wonder what properties you were referring to ("properties of an ..."?). Saying that quantum theory is about observables sounds empty to me. Almost like "Classical Mechanics is about differential equations."
Put "system" or "object". It's just a typo.
 
  • #298
Lord Jestocost said:
It seems that others didn't think like that. For example, John Bell in „BERTLMANN’S SOCKS AND THE NATURE OF REALITY“ (Journal de Physique Colloques, 1981, 42 (C2), pp.C2-41-C2-62):

Fourthly and finally, it may be that Bohr's intuition was right - in that there is no reality below some 'classical' 'macroscopic' level. Then fundamental physical theory would remain fundamentally vague, until concepts like 'macroscopic' could be made sharper than they are today.
What's vague is not clear to me. QT is the most successful theory we have today. One just has to accept that on a fundamental level the values of observables are indeterminate and the probabilistic description provided by quantum states is all there is to "reality". I still don't know what Bell specifically means by "reality". For me it's the objectively observable behavior of Nature.
 
  • Like
Likes AlexCaledin
  • #299
vanhees71 said:
Ok, then find an alternative word. [...]
A system is something we observe of course.
An alternative is "event". The problem is of course not the word, but its connotations, and whether or not they are made explicit. The word "object" is almost as bad as "system". We think of an object as existing for at least some interval of time. Many think of "photons" as traveling from a source to the detector, but it is more appropriate to speak of a pair of emission and absorption events, localized in time. QFT is better viewed as a statistical theory of events (points in spacetime). I see the "state" of an object not as something physical, but as a characterization of the correlations between events.
 
  • Like
Likes AlexCaledin and vanhees71
  • #300
Sure. That holds for the classical em. field too. What we observe are intensities at some time at some place (quantified by the energy density ##1/2(\vec{E}^2+\vec{B}^2)##). Why this is so follows also from (semiclassical) QT: What we observe are, e.g., electrons emitted in the detector medium via the photoelectric effect. Using the dipole approximation you find out that the emission probability at the given location of an atom/molecule of the detector material is indeed proportional to the energy density of the em. field.
 