Are there signs that any Quantum Interpretation can be proved or disproved?

  • #201
It is a priori clear that you cannot understand the classical behavior of the measurement device using only unitary time evolution. Thus, what you aim at is well known to be impossible to achieve.

Even in classical mechanics you cannot understand the thermodynamics of an ideal gas by solving the full microscopic equation of motion (the Liouville equation).

Your points 1.-3. are indeed the true goal, and I think this has been achieved by all the work on "decoherence" in the past 2-3 decades.
 
  • #202
vanhees71 said:
Your points 1.-3. are indeed the true goal, and I think this has been achieved by all the work on "decoherence" in the past 2-3 decades.
No. Decoherence does not contribute to the unique outcome problem - it fails on point 3 of my list of requirements. Schlosshauer, the most competent physicist on decoherence, is very explicit about this, and I agree with him.
 
  • #203
Ok, then what's wrong with the model treatments for position measurements in

Joos et al, Decoherence and the Appearance of a Classical World in Quantum Theory ?
 
  • #204
So, even though all 'stuff' is quantum mechanical in nature and there is nothing in the evolution of the quantum system that mandates single outcomes, there must be something non-quantum that destroys superpositions and renders them into definite outcomes.
Some assume that something to be the very thing we are trying to explain (some other 'classical' stuff, macroscopic detectors, detector plates, etc.). But this is circular reasoning.

If we rehash the 3 points:

1. Everything we have ever come across turns out to be quantum in nature.
2. Nothing in the formalism dictates that quantum systems must take on definite values under certain circumstances. Yet they do.
3. Whenever we act to observe, explore, see, know, inquire in any way about it, the quantum system acts as if it were in fact classical.
4. Based on point 2 and 3, we can draw the conclusion that quantumness cannot be destroyed and formatted into single outcomes on its own by other quantum systems.

So, the only conclusion left is that whatever takes place during measurement/observation is not due to anything quantum in nature. And since everything observable is in fact quantum in nature, the reason for the definite outcomes cannot be quantum in nature, i.e., it cannot be something from our familiar, observable and measurable environment.
There are very few options left to pursue from this point on. So a dedicated scientist might as well say that the goal of science is to see what we can say about our observations (nature), not to explain what happens and especially why.
I.e. this question cannot be solved in scientific terms. We are unable to explain our observations from within the realm of our observations, because it takes more than what we can identify within the realm of the observable for that observable to work, to exist, and to be measurable.
 
  • #205
vanhees71 said:
Ok, then what's wrong with the model treatments for position measurements in

Joos et al, Decoherence and the Appearance of a Classical World in Quantum Theory ?
Decoherence produces decay of phase information but does not explain how a single measurement results. It produces an improper diagonal density matrix instead of one where position has a fixed value. The improper mixture is not resolved into single results.
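The distinction can be made concrete in a toy model (my own sketch, not from the thread; all parameter names are invented): pure dephasing of a qubit drives the off-diagonal coherences of the density matrix to zero while leaving the diagonal Born probabilities untouched, so one ends up with a diagonal (improper) mixture but never with a single outcome.

```python
import math

def dephased_rho(p_up, t, T2=1.0):
    """Toy 2x2 density matrix of a qubit prepared in
    sqrt(p_up)|0> + sqrt(1-p_up)|1>, after pure dephasing for time t
    with coherence time T2."""
    c = math.sqrt(p_up * (1 - p_up)) * math.exp(-t / T2)  # decaying coherence
    return [[p_up, c], [c, 1 - p_up]]

rho_initial = dephased_rho(0.5, 0.0)   # full coherence: off-diagonals 0.5
rho_late = dephased_rho(0.5, 20.0)     # coherences ~ 0, diagonal unchanged
```

The late-time state is diagonal, but both probabilities are still 0.5: nothing in the dephasing dynamics selected one outcome.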
 
  • #206
Ok, then the goal you have in mind is not reachable within standard quantum mechanics, i.e., one has to extend quantum mechanics to something like the GRW theory with "explicit collapse". I always considered it sufficient to prove precisely that you get what you call an "improper diagonal density matrix", i.e., the probability distribution for the position (taken as a "pointer observable"). Since this is anyway all we can expect to measure and to predict with our theories, for me that's sufficient.
 
  • #207
vanhees71 said:
then the goal you have in mind is not reachable within standard quantum mechanics

No; since nobody proved the impossibility of goal 3, it is still an open problem.

It is only clear that this goal cannot be reached within the minimal statistical interpretation alone, since the only contact of the latter to reality is Born's rule, which presupposes unique outcomes for large quantum devices in nonthermal states.

vanhees71 said:
I always considered it sufficient to prove precisely that you get what you call an "improper diagonal density matrix", i.e., the probability distribution for the position (taken as a "pointer observable"). Since this is anyway all we can expect to measure and to predict with our theories, for me that's sufficient.
This amounts to taking the existence of unique macroscopic outcomes as an additional axiom in addition to the statistical interpretation.
 
  • #208
vanhees71 said:
Ok, then the goal you have in mind is not reachable within standard quantum mechanics, i.e., one has to extend quantum mechanics to something like the GRW theory with "explicit collapse".
Based on what the proponents of consistent histories wrote about related matters, my guess is that one cannot disprove Many-Worlds (within standard quantum mechanics), but that one cannot prove it either, nor can one prove the need for something like GRW.

Here is an extract from a quote from “Quantum Philosophy” by Roland Omnès:
And R. Omnès is also pretty clear that he is convinced that there is only a single world, even if "consistent histories" doesn't explain it and cannot even disprove Many-Worlds. His defense is to admit that there is still a disagreement left between Reality and quantum theory, but that it would be hubris to expect otherwise. At least that is how I interpret the words on page 214:

… have reproached quantum physics for not explaining the existence of a unique state of events. It is true that quantum theory does not offer any mechanism or suggestion in that respect. This is, they say, the indelible sign of a flaw in the theory, … Those critics wish at all costs to see the universe conform to a mathematical law, down to the minutest details, and they certainly have reason to be frustrated.

I embrace, almost with prostration, the opposite thesis, the one proclaiming how marvelous, how wonderful it is to see the efforts of human beings to understand reality produce a theory fitting it so closely that they only disagree at ...

What I guess is possible for A. Neumaier to achieve is to prove that the existence of a single world with a single quantum state is consistent both with standard quantum mechanics and with the fact that there are single well-defined outcomes when measuring some observable.
 
  • #209
EPR said:
even though all 'stuff' is quantum mechanical in nature and there is nothing in the evolution of the quantum system that mandates single outcomes, there must be something non-quantum that destroys superpositions and renders them into definite outcomes.
Not if you adopt the many-worlds interpretation.
 
  • #210
PeterDonis said:
Not if you adopt the many-worlds interpretation.
Yes. The variant of MWI where wavefunctions are somehow real, yet they extend throughout spacetime.
 
  • #211
EPR said:
The variant of MWI where wavefunctions are somehow real, yet they extend throughout spacetime.
I'm not sure what "variant" of the MWI you are talking about. The MWI says that superpositions are never destroyed and all outcomes happen; everything is unitary evolution, all the time, everywhere. That removes the need to explain how superpositions are destroyed and single outcomes happen, since in the MWI those things don't happen.
 
  • #212
A. Neumaier said:
Now detectors are large quantum systems.
Is that indeed a fact?
 
  • #213
Interested_observer said:
Is that indeed a fact?
That all macroscopic matter is described by quantum mechanics seems to be an undisputed fact. I have never seen anyone seriously questioning that after 1930.
 
  • #214
A. Neumaier said:
In view of the above, the tasks are:
  1. to produce a coarse-grained dynamics for spin+detector from the unitary evolution of a bigger system,
  2. to identify in the resulting model for the open system spin+detector macroscopic observables describing the pointer,
  3. to prove that the coarse-grained dynamics leads to a unique pointer result, a result depending upon the initial spin state in the way predicted by Born's rule.
In my view the tasks are:
1. DEduce THE dynamics for spin+detector.
2. Show that those dynamics hold for any and all measurements.

A. Neumaier said:
That all macroscopic matter is described by quantum mechanics seems to be an undisputed fact. I have never seen anyone seriously questioning that after 1930.
I agree that it "seems to be" an undisputed fact. Could this be in the realm of "appearances can be deceiving"?

BTW, I am NOT an MWI adherent.
 
  • #215
Interested_observer said:
I agree that it "seems to be" an undisputed fact. Could this be in the realm of "appearances can be deceiving"?
If you want to question it you need very good reasons to be taken seriously.
 
  • #216
A. Neumaier said:
If you want to question it you need very good reasons to be taken seriously.
I can't be taken seriously, for I know nothing of higher mathematics. But I do have two reasons for questioning:
1. The measurement problem;
2. Gravity
 
  • #217
A. Neumaier said:
No; since nobody proved the impossibility of goal 3, it is still an open problem,.

It is only clear that this goal cannot be reached within the minimal statistical interpretation alone, since the only contact of the latter to reality is Born's rule, which presupposes unique outcomes for large quantum devices in nonthermal states.

This amounts to taking the existence of unique macroscopic outcomes as an additional axiom in addition to the statistical interpretation.
Sure, if the phenomenology were different from what is observed, we'd have to adapt our theories. QT is a theory to predict probabilities for the outcome of measurements given the "experimental setup" (aka the "preparation of the system").
 
  • #218
vanhees71 said:
QT is a theory to predict probabilities for the outcome of measurements given the "experimental setup" (aka the "preparation of the system").
Quantum theory is much more than a probability theory. It predicts lots of nonprobabilistic stuff, such as spectra of all sorts, mechanical, optical and electrical properties of materials, stability of chemical compounds.
 
  • #219
Of course it does, but it derives it within a probabilistic theory.
 
  • #220
vanhees71 said:
Of course it does, but it derives it within a probabilistic theory.
No. The nonprobabilistic predictions of quantum theory are derived from the deterministic evolution of the state, without any direct reference to probability.

Only quantum expectations are needed, not their interpretation in terms of probability theory.
 
  • #221
The state (statistical operator) provides the probabilities for the outcome of measurements. The dynamical equations are of course deterministic, but that doesn't mean that the meaning of the state is deterministic.

The main obstacle to understanding what your new interpretation is saying is the last sentence. What does "quantum expectations" mean, if it does not have the usual probabilistic meaning? It is not enough to write down an abstract mathematical construct. You have to give an operational definition.

In standard quantum theory the expectation value is probabilistic. How it is measured depends of course on the system and the measurement devices. Roughly there are two types, applying typically to "microscopic" and "macroscopic" observables:

(a) Preparing a lot of systems in an equal way (described by a statistical operator), then measuring the quantity on each of these preparations and taking the average. A statistical analysis lets you probe the predictions of the theory. That's typical for measurements on microscopic systems. Examples are the scattering experiments at a particle accelerator: e.g., at the LHC very many pp collisions with a given collision energy are "prepared", and for each collision the outcomes of the reaction are "measured", finally leading to cross sections and related quantities to be compared to theory.

(b) You use a measuring device which already measures some "expectation value". The "averaging" is then inherent in the process underlying the measurement. This is typically the case when you measure macroscopic observables, which are themselves coarse-grained collective observables. E.g., describing a fluid with hydrodynamics you use coarse-grained observables like particle or mass densities, pressure, temperature, the flow field, etc. These are all collective quantities defined as averages over many (usually single-particle) observables. Such descriptions tend to lead to classical outcomes (though not always, as "macroscopic quantum phenomena" like superfluidity, superconductivity, and Bose-Einstein condensates show).
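Type (a) can be sketched in a few lines (a toy simulation of mine, with made-up numbers): prepare n identical spin-1/2 systems with Born probability p for the outcome +1, record the +/-1 outcomes, and compare the sample mean with the theoretical expectation 2p - 1.

```python
import random

def sample_mean(p_plus, n, seed=0):
    """Average of n simulated +/-1 outcomes, each drawn with Born
    probability p_plus for the outcome +1."""
    rng = random.Random(seed)
    outcomes = [1 if rng.random() < p_plus else -1 for _ in range(n)]
    return sum(outcomes) / n

p = 0.8
expectation = 2 * p - 1            # theoretical expectation value
mean = sample_mean(p, 100_000)     # statistical estimate from the "ensemble"
```

With enough repetitions the sample mean approaches the expectation value, which is how the statistical analysis probes the theory's prediction.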

Probability theory as applied in standard physics is a clear concept. If you want to substitute something else for it, it's not sufficient to offer some mathematical formalism or some philosophical erudition (like the QBists, who could never explain to me what their view of probability means in an operational sense either).
 
  • #222
vanhees71 said:
You use a measuring device which already measures some "expectation value". The "averaging" is then inherent in the process underlying the measurement. This is typically the case when you measure macroscopic observables, which are by themselves coarse grained collective observables.
What is measured is the quantum expectation of a single collective variable. This produces a single measurement result. No other measurement result is anywhere.

Your average is an average of operators defining the collective, not an average of measurements. Thus its meaning is completely different from a statistical average.
 
  • #223
You always say what it is not (namely, an average of a random variable) but never what it is!
 
  • #224
vanhees71 said:
You always say what it not is (namely an average of a random variable) but never what it is!
It is a computable quantity, based on theory, that can be compared with experiment and yields the correct result. This is enough to do highly predictive physics. What more do you want?
 
  • #225
You don't say how to measure your expectation values. You take away from the standard interpretation exactly the link between the mathematical formalism and the operational relation to the real world, which is the probabilistic interpretation, and you don't say what should make this link between the formalism and the observations in Nature in your new interpretation/formalism.
 
  • #226
vanhees71 said:
You don't say how to measure your expectation values.
?

I said many times how to measure the quantum expectations of macroscopic variables, namely by the standard experimental techniques of classical thermodynamics, hydrodynamics, elasticity theory, and electrodynamics. This is in full agreement with experiments. These are the only things directly measurable.

Everything else is inferred from macroscopic raw measurements and hence needs theory to tell how it is computed approximately from them, by simple averaging or (for quantities such as an elementary particle mass) by more complex statistical analysis. For this one follows traditional statistics and probability theory.

Microscopic quantities are measured by linking them stochastically to macroscopic observables (counter clicks, photocurrents, etc.) and averaging them in the same way as one averages other very volatile quantities to get reliable and reproducible values for them.

In particular, that a single point on a Stern-Gerlach screen should be a measurement of a single particle's spin is an unproved and, in my view, invalid assumption. The reason is that such measurements are not reproducible, hence they lack the characteristic property of all scientific experiments. Only the probability distribution, which is a collection of quantum expectations, is reproducible. Thus these are measurable, too, in the same way that classical uncertain quantities are measurable.

That you start from different assumptions and arrive at a different view is no argument against my assumptions and my view.
 
  • #227
Ok, then it's nothing new, i.e., you use the usual statistical interpretation but forbid calling it that. That doesn't make sense.

Concerning the SGE: what is established by dynamical considerations alone is that there is an entanglement between position and spin component after the silver atom runs through the magnet. The correlation can be made as close to 100% as you like. Thus, if you choose a particle in the one or the other region where the position probability peaks, you have with FAPP 100% certainty a particle polarized in the corresponding spin state. I don't know what more you need to prove. Admittedly, for the complete magnetic field you rely on numerics (or perturbation theory), but I'd not say that this fails to prove the issue in principle, namely that we have a proper preparation of the spin component under consideration.
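The claimed near-perfect correlation can be illustrated with a toy four-state model (my own construction; the overlap parameter eps is invented): the amplitudes are spread over the (region, spin) basis with a small leakage eps between the partial beams, and the conditional probability of spin '+' given the upper region comes out as 1 - eps, i.e., FAPP 100% for tiny eps.

```python
import math

def conditional_spin_prob(eps):
    """P(spin '+' | position 'up') for a toy entangled state with a small
    beam overlap eps between the two Stern-Gerlach regions."""
    amp = {  # amplitudes on the four (region, spin) basis states
        ('up', '+'): math.sqrt((1 - eps) / 2),
        ('up', '-'): math.sqrt(eps / 2),
        ('down', '-'): math.sqrt((1 - eps) / 2),
        ('down', '+'): math.sqrt(eps / 2),
    }
    p_up = sum(a ** 2 for (region, _), a in amp.items() if region == 'up')
    p_up_plus = amp[('up', '+')] ** 2
    return p_up_plus / p_up
```

For eps = 0 the correlation is exactly 100%; shrinking eps makes it as close to 100% as you like, matching the claim above.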
 
  • #228
vanhees71 said:
there is an entanglement between position and spin component
Entanglement is not measurement.
 
  • #229
A. Neumaier said:
Entanglement is not measurement.
Yes, the entanglement is not yet a measurement, but later he wrote:
vanhees71 said:
Thus, if you choose a particle in the one or the other region
So if he blanks part of the beam, then this is a non-unitary preparation procedure for the remaining beam. And because it is non-unitary, calling it a measurement can be defended to a certain degree.
 
  • #230
gentzen said:
Yes, the entanglement is not yet a measurement, but later he wrote:

So if he blanks part of the beam, then this is a non-unitary preparation procedure for the remaining beam. And because it is non-unitary, calling it a measurement can be defended to a certain degree.
It is a measurement only if a measurement result is observed. Measurements without results are not covered by Born's rule.
 
  • #231
vanhees71 said:
Ok, then it's nothing new, i.e., you use the usual statistical interpretation but forbid calling it that.
You may call it anything. What is new in the thermal interpretation is that the statistics appears only in cases where it is employed to get expectations from actual measurement results, and not earlier. This is completely analogous to the appearance of statistics in classical physics.
 
  • #232
What I described is a preparation procedure. To measure the so-prepared spin component and check the claim that one has prepared a certain ##\sigma_z## value, you need to put another Stern-Gerlach magnet in that remaining beam and check that you see only one spot on the screen. Again, this cannot be done with a measurement on a single silver atom but only on an ensemble of many equally "treated" (prepared) ones.
 
  • #233
vanhees71 said:
this cannot be done with a measurement on a single silver atom but only on an ensemble of many equally "treated" (prepared) ones.
That's why the single measurement does not measure spin, it measures only silver concentration. And the repeated measurement shows that the measured spin has (in the standard setting) a large uncertainty.
 
  • #234
The measured spin has a very small uncertainty due to the preparation procedure. SG experiments have really been done, with the expected results (if I remember right, with slow neutrons rather than silver atoms though).
 
  • #235
vanhees71 said:
The measured spin has a very small uncertainty due to the preparation procedure.
Only because you prepared it in a state where the uncertainty is small. If you don't switch off one of the regions you get random results in the two spots, and the uncertainty is very large. Thus the uncertainty depends on what was prepared, and it is large when a superposition is prepared, just as the thermal interpretation claims.
 
  • #236
If you consider far-distant regions where the two partial beams considerably overlap, of course the spin component is uncertain again. The same holds true in the double-slit experiment: if you look close enough to the slit you have more or less accurate which-way information and no interference pattern, while at far enough places you have very little which-way information but an interference pattern. That's all well understood as long as you admit the standard probabilistic interpretation of the state.

The point is, if you forbid the probabilistic interpretation and substitute expectation values for it, you have to give an operational interpretation of how to measure these expectation values to make your mathematical scheme a physical theory.
 
  • #237
vanhees71 said:
The point is, if you forbid the probabilistic interpretation and substitute expectation values for it, you have to give an operational interpretation of how to measure these expectation values to make your mathematical scheme a physical theory.
My point is that you demand this without any solid argument for its necessity. First practice what you preach! In QFT you use quantum expectations of nonhermitian field operators without giving them an operational interpretation!

Thus not everything in a physical theory needs to be interpreted operationally - only those things that are compared with experiment. And I did this, many times.
 
  • #238
Of course you don't need to give operational definitions/interpretations for mathematical auxiliary quantities like proper vertex functions or ##n##-point functions, which are just used to calculate observable quantities like S-matrix elements or cross sections, which of course have an operational meaning within the standard statistical interpretation of Q(F)T.

You did NOT give an operational meaning to your expectation values, which you claim to be what's measured; this is not true even for the simplest cases, like an SG experiment. There you find two spots on the screen, which are in 1-to-1 correspondence with determined spin components. You do not find one spot corresponding to the expectation value of the prepared spin component, which is 0.
 
  • #239
I'm afraid I also can't make sense of what Arnold Neumaier says about Stern-Gerlach experiments. I think that probability is a much more fundamental concept than "measurement" (or "q-expectations"). It is never quite clearly explained what it is that is being measured. Probability is a basic ingredient of many physical theories (much like geometry), whereas I see it as an aberration that so many physicists should think of measurement as playing an essential role in quantum theory. It's absurd to think that the quantum processes in the interior of the sun depend on "measurements" to become real.
 
  • #240
vanhees71 said:
There you find two spots on the screen, which are in 1-to-1 correspondence with determined spin components.
Yes, and the individual observations are not reproducible (hence should not be called measurements) but are random macroscopic events happening near these spots, with a mean of zero. Just as the physically meaningful and measurable center of mass of a material torus is at a point outside the torus. Nothing peculiar.

vanhees71 said:
Of course you don't need to give operational definitions/interpretations for mathematical auxiliary quantities like proper vertex functions or n-point functions, which are just used to calculate observable quantities like S-matrix elements or cross sections, which of course have an operational meaning within the standard statistical interpretation of Q(F)T
For the same reason I do not need to give operational definitions for quantum expectations in general, which are used for many purposes. In particular, they are used to define observable quantities in those cases where they have an operational meaning - namely
  1. as the nonstatistical values of macroscopic observables and,
  2. in case of random events, as the statistical expectation values of discrete but random observations, measurable in the same way as the statistical expectation values of classical random observations, namely by taking the mean of the individual observations.
In case 2., the randomness is derived rather than postulated.

Whereas the statistical interpretation has to postulate rather than derive randomness to get 2., and has to find mock reasons to give the single nonstatistical values in 1. a statistical interpretation in terms of a fictitious Gibbs ensemble of unperformed measurements with nonexistent measurement results.

You prefer the statistical interpretation; but for me it does not make sense in case 1, which, for example, determines everything we can observe about the quantum processes in the interior of the sun.

WernerQH said:
It's absurd to think that the quantum processes in the interior of the sun depend on "measurements" to become real.

So how do you explain the latter in terms of probability without measurement?
 
  • #241
WernerQH said:
It's absurd to think that the quantum processes in the interior of the sun depend on "measurements" to become real.

Isn't it equally absurd that quantum processes on the Earth depend on measurements to become real?
 
  • #242
A. Neumaier said:
So how do you explain the latter in terms of probability without measurement?
It's hard to know what kind of explanation you will find satisfactory. Surely you know that the collision of two energetic protons produces a deuteron with a very small probability, and quantum theory permits us to calculate the cross section for that. I consider protons and deuterons real, and I find quantum theory a satisfactory microscopic theory. What do you need the concept of "measurement" for?
 
  • #243
I give up. I don't understand the logic behind the idea of the "thermal interpretation". I guess you can live with it.
 
  • #244
WernerQH said:
I'm afraid I also can't make sense of what Arnold Neumaier says about Stern-Gerlach experiments. I think that probability is a much more fundamental concept than "measurement" (or "q-expectations"). It is never quite clearly explained what it is that is being measured. Probability is a basic ingredient of many physical theories (much like geometry), whereas I see it as an aberration that so many physicists should think of measurement as playing an essential role in quantum theory. It's absurd to think that the quantum processes in the interior of the sun depend on "measurements" to become real.
That's a distortion of quantum theory (unless you are a secret Bohmian :oldbiggrin: or Many-Worlder or etc). Who cares whether quantum processes in the interior of the sun are real or not? As Bohr said: "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we say about Nature."
 
  • #245
WernerQH said:
It's hard to know what kind of explanation you will find satisfactory.
I was asking for an explanation that you find satisfactory.

WernerQH said:
Surely you know that the collision of two energetic protons produces a deuteron with a very small probability, and quantum theory permits us to calculate the cross section for that. I consider protons and deuterons real, and I find quantum theory a satisfactory microscopic theory. What do you need the concept of "measurement" for?
The problem is that inside the sun we do not have an ideal gas of protons but a complicated thermal state, where the reasoning in terms of individual collisions is no longer adequate. The predictions are made using quantum statistical mechanics in terms of Gibbs ensembles - which are by definition ensembles of many hypothetical macrosystems of which only one is realized.

Probability theory enters quantum statistical mechanics only in the definition of quantum expectations, nowhere else. But this definition is based on Born's rule, which (not originally, but in its form fixed since 1927) is a rule for measurement results. Thus to justify this single use of probability theory you need to assume that a large number of measurements are performed. Nobody ever discussed which instruments performed these measurements.

The thermal interpretation avoids all this nonsensical baggage.
 
  • #246
vanhees71 said:
I give up. I don't understand the logic behind the idea of the "thermal interpretation". I guess you can live with it.
Sad to hear. Your current discussions with A. Neumaier helped me to see more clearly how the thermal interpretation resolves paradoxes I had thought about long before I heard about the thermal interpretation.

My impression of this "please give me an operational definition" request, and of A. Neumaier's reply "what sort of operational definition would you find satisfactory", is that it is similar to requesting operational definitions of the centers of gravity of the sun, the moon, or the Earth in Newton's theory. If you treat them as point particles, then those centers of gravity are the things the theory talks about, and to which its predictions apply. But you cannot directly observe the center of gravity of the Earth. You could instead observe the moon and how it orbits the Earth, to observe it indirectly. But even then, you don't directly observe its center of gravity (which would give you the most accurate information), only an approximation to it that includes a certain fundamental uncertainty. Still, A. Neumaier has to insist that "the state of the system" determines those centers of gravity, because that is what the theory talks about.
 
  • #247
Another problem is that he contradicts himself all the time. Just in his last posting he all of a sudden admitted that he needs probabilities to define his q-expectations, though all the time he has denied that this standard definition is abandoned in his thermal interpretation. The arguments have gone in circles for years, and it seems impossible to communicate even the problem with abandoning probabilities in the interpretation.
 
  • #248
vanhees71 said:
all of a sudden he admitted that he needs probabilities to define his q-expectations.
No. Probabilities are not needed to define quantum expectations, only the trace formula figures in the (purely mathematical) definition. But in case 2., where many measurement results on identically prepared systems are available, the probabilities used in classical statistics to define expectation values reproduce the quantum expectation values defined by the trace formula.

vanhees71 said:
The arguments get in circles for years, and it seems to be impossible to communicate even the problem with abandoning probabilities from the interpretation.
The argument goes in circles for years because you don't pay attention to details in my statements that make a lot of difference to the meaning.

I don't abandon probabilities from the interpretation but only from the foundations (i.e., from what I assume without discussion), and introduce them later as derived entities that can be used when the circumstances admit a statistical interpretation, namely when one has many actual observations so that a frequentist probability makes sense.
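For a 2x2 example the distinction reads as follows (a minimal pure-Python sketch of mine, with made-up numbers): the trace formula Tr(rho A) is evaluated as plain linear algebra, with no reference to probability, and only afterwards can one observe that it coincides with the probability-weighted sum of outcomes.

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(X):
    """Trace of a 2x2 matrix."""
    return X[0][0] + X[1][1]

rho = [[0.7, 0.0], [0.0, 0.3]]        # density matrix, diagonal in this basis
sigma_z = [[1.0, 0.0], [0.0, -1.0]]   # observable with outcomes +1 and -1

q_expectation = trace2(matmul2(rho, sigma_z))  # purely algebraic definition
born_average = 0.7 * 1.0 + 0.3 * (-1.0)        # statistical reading, same number
```

The two numbers agree, but only the first is used in the definition; the statistical reading enters when many actual observations are available.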
 
  • #249
You don't even understand my question. The trace formula is indeed purely mathematical. I ask for the physics! How are your q-expectation values measured? In standard QT, with its probabilistic meaning of states and the trace formula, that's clear.

Also, what's measured is by far not always an expectation value (no matter how you operationally define them), as the example with the two spots (rather than one big spot) in the simplest case of a spin-component measurement a la Stern and Gerlach demonstrates.
 
  • #250
vanhees71 said:
You don't even understand my question. The trace formula is indeed purely mathematical. I ask for the physics! How are your q-expectation values measured? In standard QT, with its probabilistic meaning of states and the trace formula, that's clear.
You don't understand my answers.

In general, q-expectation values cannot be measured; the thermal interpretation only asserts that they objectively exist (as beables in Bell's sense). But in many cases they can be measured. I gave in post #240 an operational answer for two classes of instances where they have an operational meaning, which you didn't even try to understand. Nor did you tell me which physics is missing!

There is something to be understood in my postings, not only to be argued against!

vanhees71 said:
Also, what's measured is by far not always an expectation value (no matter how you operationally define them), as the example with the two spots (rather than one big spot) in the simplest case of a spin-component measurement a la Stern and Gerlach demonstrates.
In the thermal interpretation, what's measured has a different meaning than in the statistical interpretation, since the thermal interpretation declines to call observations measurements when they are not reproducible. Only reproducible results have scientific meaning, hence deserve to be called measurement results.

What's measured in a Stern-Gerlach experiment are two silver spots composed of many small random events. From these one forms reproducible measurement results: the mean (between the spots), the standard deviation (large), and response rates to the impinging electron field intensity. These are the numbers that can be compared with the theoretical q-expectations, with very good agreement. Thus the q-expectations have in this case an operational meaning. Nothing is unphysical in this account of the experiment; on the contrary, everything is intuitive.
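This account can be mimicked numerically (a toy simulation of mine, with invented numbers): individual spot events are +1 or -1 with equal probability; the reproducible quantities are the sample mean (near 0, between the spots) and the standard deviation (near 1), which match the q-expectations for a symmetric superposition.

```python
import math
import random

def spot_statistics(n, seed=1):
    """Mean and standard deviation of n simulated +/-1 spot events."""
    rng = random.Random(seed)
    hits = [rng.choice((-1, 1)) for _ in range(n)]
    mean = sum(hits) / n
    std = math.sqrt(sum((h - mean) ** 2 for h in hits) / n)
    return mean, std

mean, std = spot_statistics(100_000)  # mean near 0, std near 1
```

Single events scatter between the two spots, but the mean and spread extracted from many events are reproducible numbers that can be compared with theory.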
 