Are there signs that any Quantum Interpretation can be proved or disproved?

  • #271
I'm not familiar with measurements of the ionization energies. Of course, you have to define the choice of the "zero point" of the energies you measure, since what you measure are always energy differences.

What is a "q-observable"?

It's of great importance to understand that the Hamiltonian in general is not gauge invariant and not an observable. That's so already in classical physics when using the Hamilton formulation of the action principle for a particle in an external electromagnetic field. The Hamiltonian contains the electromagnetic potentials and thus is not gauge invariant. For a nice explanation of the issue in the quantum context, see

Donald H. Kobe and Arthur L. Smirl, Gauge invariant formulation of the interaction of electromagnetic radiation and matter, Am. Jour. Phys. 46, 624 (1978)
https://doi.org/10.1119/1.11264
 
  • Like
Likes gentzen
  • #272
vanhees71 said:
It's of great importance to understand that the Hamiltonian in general is not gauge invariant
Very good, so my impression is that you understood my problem, and I understood your position. Whether or not I used words in a way that seems inappropriate to you is not important ("the Hamiltonian in general is ... not an observable"), because my focus is often less on individual words and more on the concrete stuff.

vanhees71 said:
What is a "q-observable"?
This is defined at the end of subsection 2.2 (Properties) in "Foundations of quantum physics II. The thermal interpretation" as
A subsystem of a system is specified by a choice declaring some of the quantities (q-observables) of the system to be the distinguished quantities of the subsystem. This includes a choice for the Hamiltonian of the subsystem.
If you are not familiar with that paper, looking at equations (6), (7), and (8) in section "2.1 The Ehrenfest picture of quantum mechanics" could be helpful for understanding in which sense I feel that q-expectations share many of the good properties of statistical operators. I once wrote:
The formulation of QM in section "2.1 The Ehrenfest picture of quantum mechanics" via (6), (7), and (8) shows another interesting advantage of using the collection of q-expectations as the state instead of the density operator. That presentation unifies the Schrödinger, Heisenberg, and Dirac pictures, but the density operator itself is different in each picture. That presentation even unifies classical and quantum mechanics.

However, that unification may be treacherous. It doesn't just include classical mechanics, but also classical mechanics with epistemic uncertainty about the state of the system. But in classical mechanics, there is a clear way to distinguish a state with epistemic uncertainty from a state without. In quantum mechanics, people tried resorting to pure states to achieve this distinction. But the thermal interpretation explicitly denies pure states this privilege, and explains well why it is important to deny pure states any special status.

vanhees71 said:
For a nice explanation of the issue in the quantum context, see

Donald H. Kobe and Arthur L. Smirl, Gauge invariant formulation ...
Thanks for the reference. I will have a look. Maybe it will indeed improve my understanding of "gauge freedom" and its impact on results from concrete computations.
 
  • #273
I don't understand A. Neumaier's interpretation, because it takes away the only interpretation which makes contact with real-world experiments (the probabilistic interpretation of the state, described by the statistical operator) and doesn't provide a new reinterpretation of the Born rule, just calling the usual expectation value, ##\langle O \rangle =\mathrm{Tr}(\hat{\rho} \hat{O})## (expectation value in the usual meaning defined by probability theory), the "q-expectation value". If there is no probabilistic interpretation allowed, it's not clear how to relate this mathematical formal object to real-world objects dealt with in experiments.
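(Just to pin down the formula itself, independently of interpretation: a minimal numerical sketch of ##\langle O \rangle =\mathrm{Tr}(\hat{\rho} \hat{O})##. The spin-1/2 state and observable below are arbitrary assumptions chosen for illustration only.)

```python
import numpy as np

# Observable O: Pauli-z, with eigenvalues +1 and -1
O = np.array([[1, 0],
              [0, -1]], dtype=complex)

# An assumed mixed spin-1/2 state: 70% "up", 30% "down" along z
rho = np.array([[0.7, 0.0],
                [0.0, 0.3]], dtype=complex)

# q-expectation value = ordinary expectation value: <O> = Tr(rho O)
expectation = np.trace(rho @ O).real
print(expectation)  # 0.7*(+1) + 0.3*(-1) = 0.4
```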

All this has nothing to do with the concrete question of gauge invariance. I think the cited paper by D. H. Kobe (and the many great references therein, particularly the ones by Yang) is bang on your problem.

It's a great exercise to think about the motion of a charged particle in a homogeneous magnetic field leading to the famous Landau levels and formulate it in a gauge invariant way (the energy eigenvalue problem can be completely solved for the gauge-invariant observable probabilities).
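(For reference, and hedging on unit conventions: the gauge-invariant spectrum that exercise should reproduce, for a non-relativistic particle of charge ##q## and mass ##m## in a homogeneous field ##B## along ##z##, with the free motion along ##z## separated off, is the Landau-level result
$$E_{n,p_z}=\hbar\omega_c\left(n+\frac{1}{2}\right)+\frac{p_z^2}{2m},\qquad \omega_c=\frac{|q|B}{m}\ \text{(SI units)},\qquad n=0,1,2,\ldots,$$
independent of the gauge chosen for the vector potential.)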

Another very good source is the textbook by Cohen-Tannoudji, Quantum Mechanics, Vol. I, Complement H III.
 
  • #274
vanhees71 said:
So what's literally measured is the position of the Ag atoms when hitting this screen.
vanhees71 said:
If there is no probabilistic interpretation allowed, it's not clear how to relate this mathematical formal object to real-world objects dealt with in experiments.
The thermal interpretation gives a deterministic interpretation for q-expectations of macroscopic quantities, which takes account of all measured currents, pointer readings, dots on a screen, counters, etc. This covers everything that is literally measured, in the sense of the first quote. This relates the quantum formalism to a large class of real-world objects dealt with in experiments.

In addition, there is a derivation of the probabilistic interpretation for q-expectations of microscopic quantities (namely as statistical expectation values), so this does not have to be postulated (but neither is it forbidden).

Thus everything of experimental interest is covered. I don't understand why you object.
 
  • #275
How then do you explain the observed fact that the position at which the Ag atom hits the screen is random (provided the initial Ag atoms are not prepared in eigenstates of the spin component under investigation)? The single atom doesn't land around the expectation value (in the usual probabilistic sense, because I don't understand yet the instrumental meaning of your q-expectation values) with some uncertainty (again in the usual probabilistic sense of a standard deviation) but around one of two spots. The demonstration of this "direction quantization" was the great achievement of the experiment!

I don't object, I only need an instrumental understanding. If you now say you can derive the usual probabilistic interpretation and accept it as the instrumental understanding of the formalism, I don't know why you negate the validity of the standard probabilistic view all the time. Understood in this way, your thermal interpretation just uses another set of postulates to get back the same quantum theory we have had since 1926. If this is now an understanding acceptable to you, the only point I have to understand then is why you insist on the collapse of the state as something outside the formalism but necessary for its interpretation.
 
  • #276
vanhees71 said:
How then do you explain the observed fact that the position at which the Ag atom hits the screen is random (provided the initial Ag atoms are not prepared in eigenstates of the spin component under investigation)? The single atom doesn't land around the expectation value (in the usual probabilistic sense
This quantum fact is explained in the same way as the observed classical fact that the values seen when casting a die are integers although the expectation value is not. But the expectation values of the powers (the moments) allow one to reconstruct the complete probability distribution, and this reveals that the individual values cast are just 1, ..., 6.
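(A minimal numerical sketch of this point, assuming a made-up, slightly loaded die: on a known finite support the first few moments fix the whole distribution.)

```python
import numpy as np

# The die's support and an assumed (made-up) slightly loaded distribution
values = np.arange(1, 7)
p_true = np.array([0.10, 0.15, 0.20, 0.20, 0.15, 0.20])

# Moments E[X^k] for k = 0..5 -- the "expectation values of the powers"
moments = np.array([np.sum(p_true * values**k) for k in range(6)])

# Recover the distribution: solve  sum_i p_i * values_i^k = moments_k  for p
M = np.vander(values, 6, increasing=True).T   # M[k, i] = values[i]**k
p_recovered = np.linalg.solve(M, moments)

print(np.allclose(p_recovered, p_true))       # True: the moments determine the distribution
```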

vanhees71 said:
If you now say you can derive the usual probabilistic interpretation and accept it as the instrumental understanding of the formalism, I don't know why you negate the validity of the standard probabilistic view all the time.

I was never negating the validity of the standard probabilistic view. I just removed it from the foundations! I accept the usual probabilistic interpretation not as the instrumental understanding of the formalism in general but only as the instrumental understanding in cases where one actually measures a whole probability distribution!
vanhees71 said:
If this is now an understanding acceptable to you,
Not yet, because you want to have it as foundation, whereas I want to have it as consequence of more natural (and more complete) foundations.
vanhees71 said:
the only point I have to understand then is why you insist on the collapse of the state as something outside the formalism but necessary for its interpretation.
The collapse is not an assumption in the thermal interpretation, it is just a frequently (but as you correctly remark, not always) observed fact. I insist on its presence only because the collapse cannot be derived from your minimal statistical interpretation but is needed in practice, and hence shows that the minimal statistical interpretation is incomplete, hence too minimal.
 
  • #277
I don't care what you take as the postulates, as long as you end up with a connection between the formalism and the observations that successfully describes the observations. Of course, if you have all moments of a distribution, you have the distribution. The important point about the instrumental meaning is just that it's a probability distribution. So it seems to be settled that I can read your q-expectation values simply in terms of the standard interpretation of probabilities, defined in the usual way as QT has done since 1926.

Of course the collapse cannot be derived, because it's by assumption outside the formal description. It has no foundation whatsoever. In the case of relativistic QFT it even contradicts its very foundations, which rest on locality/microcausality.

I don't know why you insist on its necessity, because I don't see where you need it. I also don't see why the minimal statistical interpretation should be incomplete and your interpretation complete. I thought in the end it's simply equivalent (as soon as I'm allowed to give your mathematical operations, particularly the Born rule for calculating your q-expectation values, the standard probabilistic meaning as expectation values).
 
  • #278
vanhees71 said:
The important point about the instrumental meaning is just that it's a probability distribution.
Only when you have many copies of the same microscopic system.

But for a single ion in an ion trap, probabilities are instrumentally meaningless since to measure probabilities you need many copies!
 
  • #279
vanhees71 said:
I also don't see why the minimal statistical interpretation should be incomplete and your interpretation complete.
vanhees71 said:
Of course the collapse cannot be derived, because it's by assumption outside the formal description.
Well, if the minimal statistical interpretation were complete, the formalism together with this interpretation should allow the derivation of the collapse or, more generally, should allow one to predict from the microscopic description of a filter how individual input states are transformed into individual output states. This is the measurement problem.
 
  • #280
Again: The achievement of the 2012 Nobelists is that they can use one atom/photon. That doesn't mean that there is not the usual meaning of probability concerning the observables on this one atom/photon. They just repeat the same measurement with the one individual atom/photon. I can use one and the same die and throw it repeatedly to get the probability for the outcomes. I don't need to use different dice (indeed, for macroscopic objects these are strictly speaking two different random experiments, because the dice are never exactly identical, while sufficiently simple "quantum objects" are).

Since the predictions of QT are probabilistic you have to do that to be able to gain "enough statistics" to compare your probabilistic predictions with the statistics of the measurement outcomes.
 
  • #281
vanhees71 said:
That doesn't mean that there is not the usual meaning of probability concerning the observables on this one atom/photon.
They interpret the results for one single atom. The only statistics they gather is over the time series produced by this atom, and they draw conclusions for this atom.
vanhees71 said:
I can use one and the same die and throw it repeatedly to get the probability for the outcomes.
But only if you cast the die in a random way, so that the outcomes are independently distributed. This is not the case in a continuous quantum measurement: the latter means that you measure the die many times while it falls and stop the experiment after the die is at rest, or that after you cast the first die you lift it carefully and put it down again to get the next value. In both cases the probability for the outcome becomes meaningless.
vanhees71 said:
Since the predictions of QT are probabilistic you have to do that to be able to gain "enough statistics" to compare your probabilistic predictions with the statistics of the measurement outcomes.
Some predictions are probabilistic, some are not. If you have a single atom in a trap then the raw observations are noisy but not independent realizations of the same quantity; thus the minimal interpretation does not say anything meaningful. Also, the observations depend on the controls applied to the atom - just as when manipulating a die by hand during the measurements.

The observations are therefore not given by Born's rule but by the rules for an externally controlled quantum stochastic process. Being able to do this with correct predictions was worth a Nobel Prize.
 
  • #282
Sure, but they use the "time series" (as you call it) to gain statistics. I don't see how this contradicts the very general foundations of QT as formulated by our standard postulates, only because I use a single atom to perform the same measurement repeatedly. Since atoms are exactly identical it doesn't make any difference whether I repeat the same experiment on the one individual atom or on several different identical atoms (all this of course stated within standard QT ;-)).
 
  • #283
vanhees71 said:
Sure, but they use the "time series" (as you call it) to gain statistics. I don't see how this contradicts the very general foundations of QT as formulated by our standard postulates, only because I use a single atom to perform the same measurement repeatedly.
Because in order that Born's rule is instrumentally meaningful you need identically prepared systems. Even for classical statistics, you need many instances to get meaningful frequentist (hence scientifically well-posed) probabilities.
vanhees71 said:
Since atoms are exactly identical it doesn't make any difference whether I repeat the same experiment on the one individual atom or on several different identical atoms.
To prepare a quantum system you always need to make it distinguishable from other, identical systems that are not prepared. Indeed, the atom in an ion trap is not indistinguishable, but is distinguished by the property of being in the trap, which has room for only one atom.

The point is that at each time measured, the ion is in a (at least potentially) different state, so the notion of 'identically prepared' cannot be applied. Though all atoms with the same number of protons, neutrons, and electrons are identical in the sense of Bose statistics, the atoms are not identically prepared! The one you single out in an ion trap is very differently prepared from one in an ideal gas.
 
  • Like
Likes gentzen, mattt, dextercioby and 1 other person
  • #284
But in this experiment there are many identically prepared systems, realized with one and the same molecule in a trap. I don't see why an ensemble shouldn't be realized with one and the same system. That has nothing to do with quantum mechanics. You have the same in classical statistics: if you take a gas in a container, it also consists of the same molecules all the time, and the thermodynamical quantities (temperature, density, pressure, ...) are understood as averages over many collisions of these same molecules.

I also don't understand why you say the ion is not prepared. On the contrary, it's pretty sharply prepared, being confined in the trap. The laser exciting it is of course also part of the preparation procedure. Of course the single ion in the trap is not prepared in a thermal state here; I never claimed this.
 
  • #285
vanhees71 said:
That has nothing to do with quantum mechanics.
Exactly. And the irony is that A. Neumaier never denied this. One of his points is that the thermal interpretation solves this issue both for classical thermodynamics and for quantum mechanics. And his attempts to bring this point across to you are what helped me to finally get it. As I wrote: "Your current discussions with A. Neumaier helped me to see more clearly how the thermal interpretation resolves paradoxes I had thought about long before I heard about the thermal interpretation."
 
  • #286
Maybe then you can explain to me what the physical content of this interpretation is. I still don't get it from Neumaier's explanations, which are partially self-contradictory: One time he abandons the standard probabilistic meaning of his "q-expectation values", and then I'm lost, because then there's no physical meaning of the formalism left. Then he tells me again that it's still the same probabilistic meaning as in standard minimally interpreted QT, but then I don't see where the difference between his and the standard QT interpretation lies.

Then there is the issue of doing experiments with a single ion in a trap. Neumaier seems to believe these cannot be described within the standard minimal interpretation, but that's not right, because many people in this community of physicists work well with standard QT, and indeed what's measured here are probabilities or expectation values over many realizations of the experiment. That you use one and the same ion to realize these ensembles is no issue at all.
 
  • Like
Likes WernerQH
  • #287
vanhees71 said:
Then he tells me again that it's still the same probabilistic meaning as in standard minimally interpreted QT, but then I don't see where the difference between his and the standard QT interpretation lies.
For the cases where the standard minimally interpreted QT applies, there is no significant difference between his and the standard QT interpretation.

vanhees71 said:
One time he abandons the standard probabilistic meaning of his "q-expectation values"
The name "q-expectation value" is reserved for the value computed by the model from the specific formula. The interpretation of those values is done separately. One reason for this is that not all values that the model can compute by such formulas will have a direct operational meaning in the real world.

vanhees71 said:
Maybe then you can explain to me what the physical content of this interpretation is.
Well, I wrote an explanation, but I have set it aside for the moment. I am not sure whether A. Neumaier would be happy if I tried, because he has written nicely polished articles and a nicely polished book where he explains it. Any explanation in a few words is bound to misrepresent his views, and additionally I should only speak for myself.

Let me instead remark on how I see its relation to QBism: there one talks about "agents", but what is an agent, and why should you care? In the thermal interpretation there are no agents; models are taken seriously on their own. In QBism, the agent uses QM as a cookbook to update his state. A model, on the other hand, naturally has a state and doesn't need a cookbook to update it; the consistent evaluation of the state at arbitrary places in space and time is exactly what a model is all about.

But how do engineers and scientists use models to make predictions about the real world? Good question! Try to closely watch what they actually do, and try not to be misled by their words about what they believe they are doing!
 
  • #288
Ok, then I have to give up. I also don't understand QBism as a physical interpretation of QT. I also don't see where standard minimally interpreted QT should not apply (except for the unsolved problem of "quantum gravity", but that's not under debate here). If I watch what experimentalists do when they use QT, it's always within the standard probabilistic meaning of the quantum state.
 
  • #289
Isn't it odd that a theory that is almost 100 years old triggers such debates between two people who know it extremely well? It seems to disprove the idea that there is "no problem at all". The meaning of probability is being discussed to this day. In my opinion probability theory, just like geometry, is an indispensable ingredient of modern physical theories.

Is it necessary to emphasize that an ensemble has properties different from those of its members? An ensemble (average) can evolve smoothly and deterministically, but this need not be true for its members.

The purpose of an ensemble is to permit the statistical description of its members. And it is here where (I think) the deficiency of the statistical interpretation lies: It is too vague on what quantum theory is about, which properties the members of the ensembles have or do not have. It is not adequate to talk about quantum "objects" with conflicting properties, or properties that do not exist at all times.

An ensemble need not be physical. It doesn't need to have as many members as there are molecules in a volume of gas. As Willard Gibbs has shown, it is sufficient for our calculations that we can imagine it.
 
  • Like
Likes gentzen and vanhees71
  • #290
WernerQH said:
The purpose of an ensemble is to permit the statistical description of its members. And it is here where (I think) the deficiency of the statistical interpretation lies: It is too vague on what quantum theory is about, which properties the members of the ensembles have or do not have. It is not adequate to talk about quantum "objects" with conflicting properties, or properties that do not exist at all times.
The problem is all this philosophical ballast put on QT by Bohr et al. Too much philosophy hides the physics. According to quantum theory the properties of an are described by the quantum state, represented by the statistical operator. There is nothing conflicting here. It uniquely tells you the probabilities to find one of the possible values of any observable when you measure it. Observables take determined values if and only if the system is prepared in a corresponding state, for which with 100% probability these observables take one of their possible values. The formalism also implies that generally it is impossible to prepare the system in a state where all observables take determined values.
 
  • #291
vanhees71 said:
According to quantum theory the properties of an are described by the quantum state, represented by the statistical operator. There is nothing conflicting here.
The formalism is perfect. But I do wonder what properties you were referring to ("properties of an ..."?). Saying that quantum theory is about observables sounds empty to me. Almost like "Classical Mechanics is about differential equations."
 
  • #292
Classical mechanics is about observables too, of course. As the name suggests, the state describes the system's properties unambiguously in both classical and quantum mechanics. Only the physical meaning of the (pure) states differs drastically.

In classical mechanics a pure state is a point in phase space. Specifying the point in phase space exactly at time ##t_0## implies that you know the exact point in phase space at any later time, and this implies that you know the precise values of all possible observables at any time ##t>t_0## (assuming you can exactly solve the equations of motion).

In quantum mechanics a pure state is represented by a statistical operator ##\hat{\rho}## that is a projection operator, ##\hat{\rho}^2=\hat{\rho}##, which means that there is a normalized state vector ##|\psi \rangle## such that ##\hat{\rho}=|\psi \rangle \langle \psi|##. You can consider it determined by a filter measurement of a complete set of compatible observables. This is the most complete preparation possible for a quantum system according to standard quantum theory, but all it implies concerning any observable is the probability for the outcome of a precise measurement, given by Born's rule: if you measure an observable ##O##, the probability to find one of its possible values ##o## (##o## lying in the spectrum of the self-adjoint operator ##\hat{O}## representing ##O##), with ##|o,\alpha \rangle## a complete orthonormal set of the eigenspace ##\mathrm{Eig}(\hat{O},o)##, is
$$P(o)=\sum_{\alpha} \langle o,\alpha|\hat{\rho}|o,\alpha \rangle.$$
This is the only meaning of the formalism: It predicts probabilities for the outcome of measurements of any observable of the system given the preparation of the system, even if the preparation is as complete as it can be according to QT.
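(Purely as a numerical illustration of this formula, with an arbitrary assumed ##\hat{\rho}## and a degenerate ##\hat{O}## on a 3-dimensional Hilbert space; nothing here is interpretation-specific.)

```python
import numpy as np

# An assumed (randomly generated) statistical operator rho: positive, unit trace
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# An observable with a degenerate eigenvalue o = +1 (2-dimensional eigenspace)
O = np.diag([1.0, 1.0, -1.0])

# Spectral decomposition; the columns of U form an orthonormal eigenbasis |o, alpha>
eigvals, U = np.linalg.eigh(O)

# Born rule: P(o) = sum_alpha <o,alpha| rho |o,alpha>
P = {}
for o, vec in zip(eigvals, U.T):
    key = round(float(o), 10)          # group numerically equal eigenvalues
    P[key] = P.get(key, 0.0) + (vec.conj() @ rho @ vec).real

print(P)                 # probabilities for o = -1 and o = +1
print(sum(P.values()))   # 1.0 up to rounding, since Tr(rho) = 1
```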

Again, the dynamical state evolution is deterministic: given ##\hat{\rho}(t_0)##, the state is determined at any later time ##t## by solving the von Neumann equation (or, for pure states, the Schrödinger equation with the given Hamiltonian). This describes the Schrödinger picture to keep the formulation simple; the same holds of course in any other picture of time evolution, but there the eigenstates evolve with time too, and the physical results, i.e., the probabilities, are independent of the choice of picture.
 
  • #293
Thanks, I'm familiar with all this.
vanhees71 said:
According to quantum theory the properties of an [?] are described by the quantum state, represented by the statistical operator.
You probably meant to write properties of a system. John Bell argued that a word like "system" (just like "apparatus" or "measurement") should have no place in a rigorous formulation of quantum theory ("Against Measurement").
 
  • #294
vanhees71 said:
The problem is all this philosophical ballast put on QT by Bohr et al. Too much philosophy hides the physics.
It seems that others didn't think like that. For example, John Bell in "BERTLMANN'S SOCKS AND THE NATURE OF REALITY" (Journal de Physique Colloques, 1981, 42 (C2), pp. C2-41-C2-62):

Fourthly and finally, it may be that Bohr's intuition was right - in that there is no reality below some 'classical' 'macroscopic' level. Then fundamental physical theory would remain fundamentally vague, until concepts like 'macroscopic' could be made sharper than they are today.
 
  • #295
Fra said:
By a similar argument one could argue that the detailed Hamiltonian for such a system + macroscopic detector is in principle not inferable by an observer?
physicsworks said:
Why? An observer records particular values of collective coordinates associated with the macroscopic detector. As long as this detector, or any other macroscopic object like a piece of paper on which we wrote these values, continues to exist in a sense that it doesn't explode into elementary particles, we can, with fantastic accuracy, use Bayes' rule of conditioning (on those particular values recorded) to predict probabilities of future observations. If those macroscopic objects which recorded our observations by means of collective coordinates cease to exist in the above mentioned sense, then we must go back and use the previous probability distribution before such conditioning was done.
Because a real observer does not always have enough information-processing capacity to resolve and infer the detailed unitary evolution before the system changes or the observer is forced to interact. This is possible only in the case where the quantum system is a small subsystem and the observer is dominant (and classical). This is why the laws of physics appear timeless only for small subsystems and small timescales. Time evolution cannot generally be inferred to be exactly unitary with certainty, even in principle. In a textbook example the Hamiltonian of a black box may be given, but for a real observer even the Hamiltonian needs to be inferred, not just the initial state. So the observer's "information about the laws" and its information about states should somehow be treated by a more equal standard (the laws presumably evolve as well).

/Fredrik
 
  • #296
WernerQH said:
Thanks, I'm familiar with all this.

You probably meant to write properties of a system. John Bell argued that a word like "system" (just like "apparatus" or "measurement") should have no place in a rigorous formulation of quantum theory ("Against Measurement").
Ok, then find an alternative word. I don't know why it is forbidden to use standard language for well-defined things. A system is something we observe, of course.
 
  • #297
WernerQH said:
The formalism is perfect. But I do wonder what properties you were referring to ("properties of an ..."?). Saying that quantum theory is about observables sounds empty to me. Almost like "Classical Mechanics is about differential equations."
Put "system" or "object". It's just a typo.
 
  • #298
Lord Jestocost said:
It seems that others didn't think like that. For example, John Bell in "BERTLMANN'S SOCKS AND THE NATURE OF REALITY" (Journal de Physique Colloques, 1981, 42 (C2), pp. C2-41-C2-62):

Fourthly and finally, it may be that Bohr's intuition was right - in that there is no reality below some 'classical' 'macroscopic' level. Then fundamental physical theory would remain fundamentally vague, until concepts like 'macroscopic' could be made sharper than they are today.
What is supposed to be vague is not clear to me. QT is the most successful theory we have today. One just has to accept that on a fundamental level the values of observables are indeterminate and that the probabilistic description provided by quantum states is all there is to "reality". I still don't know what Bell specifically means by "reality". For me it's the objectively observable behavior of Nature.
 
  • Like
Likes AlexCaledin
  • #299
vanhees71 said:
Ok, then find an alternative word. [...]
A system is something we observe, of course.
An alternative is "event". The problem is of course not the word, but its connotations, and whether or not they are made explicit. The word "object" is almost as bad as "system". We think of an object as existing for at least some interval of time. Many think of "photons" as traveling from a source to the detector, but it is more appropriate to speak of a pair of emission and absorption events, localized in time. QFT is better viewed as a statistical theory of events (points in spacetime). I see the "state" of an object not as something physical, but as a characterization of the correlations between events.
 
  • Like
Likes AlexCaledin and vanhees71
  • #300
Sure. That holds for the classical em. field too. What we observe are intensities at some time at some place (quantified by the energy density ##1/2(\vec{E}^2+\vec{B}^2)##). Why this is so also follows from (semiclassical) QT: what we observe are, e.g., electrons emitted in the detector medium via the photoelectric effect. Using the dipole approximation, you find that the emission probability at the given location of an atom/molecule of the detector material is indeed proportional to the energy density of the em. field.
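(A hedged one-line sketch of that semiclassical argument, assuming a quasi-monochromatic field of amplitude ##\vec{E}_0## at the atom's position and first-order perturbation theory in the dipole coupling:
$$\hat{H}_{\text{int}}=-\hat{\vec{d}}\cdot\vec{E}(t),\quad \vec{E}(t)=\vec{E}_0\cos(\omega t)\;\Rightarrow\; w_{i\to f}\propto \big|\langle f|\hat{\vec{d}}\cdot\vec{E}_0|i\rangle\big|^2 \propto |\vec{E}_0|^2 \propto \bar{u},$$
with ##\bar{u}## the cycle-averaged energy density of the field at that point, so the photoemission probability indeed tracks the local field energy density.)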
 
