QM Eigenstates and the Notion of Motion

In summary: In which sense do you think that "something is moving" even if the system is in an energy eigenstate? The disagreement is not about energy eigenstates being stationary states. It is about whether something is moving in a state with a non-zero expectation value for the velocity operator, that is, whether motion can be attributed to a state beyond just its expectation value.
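(As an illustration not present in the thread: a minimal numerical sketch, in a truncated harmonic-oscillator basis, of the fact that an energy eigenstate has a vanishing momentum expectation value while its kinetic-energy expectation value is non-zero.)

```python
import numpy as np

# Truncated harmonic-oscillator Fock basis (units with hbar = m = omega = 1).
N = 60                                    # truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
adag = a.conj().T                         # creation operator

p = 1j * (adag - a) / np.sqrt(2)          # momentum operator in this basis

n = 3                                     # pick the n-th energy eigenstate |n>
psi = np.zeros(N)
psi[n] = 1.0

exp_p  = (psi @ p @ psi).real             # <n|p|n>   -> 0 (nothing "moves" on average)
exp_p2 = (psi @ p @ p @ psi).real         # <n|p^2|n> -> n + 1/2 (kinetic energy is non-zero)
print(round(exp_p, 6), round(exp_p2, 6))  # prints 0.0 and 3.5
```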
  • #71
Morbert said:
It's the practice of decision making, I'd assume. Even if you only play Russian roulette once, it's better to play with one bullet in the chamber than with 3. Usually this kind of decision making is connected to measurement via the notion of bets. QBists see probabilities as instructions for betting on measurement outcomes.
Yes, I understand this, but physical experiments are not about decision making but about figuring out how Nature behaves, and for that I need to gain "enough statistics" in any experiment.

As an application, take the weather forecast: if they say that tomorrow there's a 30% probability of rain, this clearly means that experience (and models built on this experience) tells us that in about 30% of equal or sufficiently similar weather conditions it will rain, and I can base a decision on this, e.g. whether I take the risk of planning a garden party or not. But to figure out reliable probability estimates with some certainty, you need to repeat the random experiment and use the frequentist interpretation of probabilities; that is also well established within probability theory ("central limit theorem" etc.).
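(A toy sketch of the frequentist reading, assuming a hypothetical "true" rain probability of 0.3 for such days: the observed relative frequency converges to it as the number of sufficiently similar days grows, with fluctuations shrinking like ##1/\sqrt{N}##.)

```python
import numpy as np

rng = np.random.default_rng(0)
p_rain = 0.3   # hypothetical "true" probability of rain under these weather conditions

for n_days in (10, 100, 10_000, 1_000_000):
    rained = rng.random(n_days) < p_rain  # one yes/no outcome per sufficiently similar day
    print(f"{n_days:>9} similar days: relative frequency of rain = {rained.mean():.4f}")
```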
 
  • #72
vanhees71 said:
As an application, take the weather forecast: if they say that tomorrow there's a 30% probability of rain, this clearly means that experience (and models built on this experience) tells us that in about 30% of equal or sufficiently similar weather conditions it will rain, and I can base a decision on this, e.g. whether I take the risk of planning a garden party or not. But to figure out reliable probability estimates with some certainty, you need to repeat the random experiment ...
It is not at all clear what a 30% probability of rain tomorrow means. (And the meaning of "sufficiently similar" feels quite subjective to me.) Say another weather forecaster works with another model and another systematic, and predicts a 33% probability instead. And yet another one has a systematic that can only predict 0%, 50%, and 100%, and predicts a 0% probability of rain tomorrow. How can you compare their predictions and decide whose predictions are better or worse? Let's say you try to evaluate all three forecasters over a period of 3 months, i.e. based on approx. 90 predictions by each forecaster. Is there some "objective" way to do this?

"The Art of Statistics" by David Spiegelhalter is a nice book, which also gives an overview about how this type of problem gets approached in practice. One nice way is to compute their Brier score. I find this nice, because it is about the simplest score imaginable, and it has the nice property that predictions of 0% or 100% probability are not special in any way. This reminds me on David Mermin's QBist contribution, that prediction with 0% or 100% probability in quantum mechanics are still personal judgments, and don't give any more objectivity than any other prediction.
 
  • #73
Morbert said:
If motion of a subsystem is stipulated to be the change in expectation value of the kinetic energy operator of the centre of mass of that subsystem, then everything is consistent, though it's not clear how this maps to the intuitive notion of motion as change in position, since position is no longer an objective property of the stationary quantum system like it is in classical physics.
A subsystem simply has a smaller observable algebra, and its state is therefore the state obtained by tracing out all the other degrees of freedom.

A nonrelativistic system consisting of ##N## distinguishable particles has ##N## position vectors ##q_i## and ##N## momentum vectors ##p_i##, whose components belong to the observable algebra. A subsystem simply has fewer particles. Their uncertain positions and momenta are given by the associated quantum expectations, with uncertainties given by their quantum deviations.

Motion is as intuitive as in the classical case, except that the world lines are now fuzzy world tubes because of the inherent uncertainty in observing quantum values. For example, an electron in flight has an extremely narrow world tube, while an electron orbiting a hydrogen atom in the ground state has a fuzzy world tube along the world line of the nucleus, with a fuzziness of the size of the Compton wavelength. I find this very intuitive!
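(A minimal sketch, not from the thread, of the two ingredients above for a hypothetical two-qubit example: the reduced state of a subsystem obtained by tracing out the other degrees of freedom, and the quantum expectation and deviation of one of its observables.)

```python
import numpy as np

# Two-qubit example: the state of subsystem A is obtained by tracing out subsystem B.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())                # full density matrix (4x4)

rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # partial trace over B -> 2x2

# Quantum expectation and deviation of a subsystem observable (here sigma_z).
sigma_z = np.diag([1.0, -1.0])
expectation = np.trace(rho_A @ sigma_z).real
deviation = np.sqrt(np.trace(rho_A @ sigma_z @ sigma_z).real - expectation**2)

print(rho_A)                    # maximally mixed: [[0.5, 0], [0, 0.5]]
print(expectation, deviation)   # 0.0 and 1.0
```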
 
  • #74
gentzen said:
It is not at all clear what a 30% probability of rain tomorrow means. (And the meaning of "sufficiently similar" feels quite subjective to me.) Say another weather forecaster works with another model and another systematic, and predicts a 33% probability instead. And yet another one has a systematic that can only predict 0%, 50%, and 100%, and predicts a 0% probability of rain tomorrow. How can you compare their predictions and decide whose predictions are better or worse? Let's say you try to evaluate all three forecasters over a period of 3 months, i.e. based on approx. 90 predictions by each forecaster. Is there some "objective" way to do this?
The interpretation given by
vanhees71 said:
As an application, take the weather forecast: if they say that tomorrow there's a 30% probability of rain, this clearly means that experience (and models built on this experience) tells us that in about 30% of equal or sufficiently similar weather conditions it will rain
can be made quite precise, and allows one to rate the quality of different prediction algorithms. Of course over the long run (which is what counts statistically), one algorithm can be quite accurate (and hence is trustworthy) while another one can be far off (and hence is not trustworthy).

The precise version of the recipe by @vanhees71 is the following:

For an arbitrary forecast algorithm that predicts a sequence ##\hat p_k## of probabilities for a sequence of events ##X_k##,
$$\sigma:=\sqrt{\operatorname{mean}_k\big((X_k-\hat p_k)^2\big)}\ \ge\ \sigma^*:=\sqrt{\operatorname{mean}_k\big((X_k-p_k)^2\big)},$$
where ##p_k## is the true probability of ##X_k##.

##\sigma## is called the RMSE (root mean squared error) of the forecast algorithm, while ##\sigma^*## is the unavoidable error. The closer ##\sigma## is to ##\sigma^*##, the better the forecast algorithm.

Of course, in complex situations the unavoidable error ##\sigma^*## is unknown. Nevertheless, choosing among the available forecast algorithms the one with the smallest RMSE (based on past predictions) is the most rational choice. Nothing subjective is left.
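(A sketch of this recipe on synthetic data, where the true probabilities ##p_k## are known only because we generate them ourselves; ##\sigma^*## is here the RMSE of an ideal forecaster that predicts ##\hat p_k = p_k##.)

```python
import numpy as np

rng = np.random.default_rng(2)

K = 5000
p_true = rng.uniform(0, 1, K)               # true probabilities p_k (unknown in practice)
X = (rng.random(K) < p_true).astype(float)  # realized events X_k (0 or 1)

def rmse(p_hat, outcomes):
    """Root mean squared error of a sequence of probability forecasts."""
    return np.sqrt(np.mean((outcomes - p_hat) ** 2))

forecasts = {
    "ideal (p_hat = p_k)": p_true,
    "noisy":               np.clip(p_true + rng.normal(0, 0.2, K), 0, 1),
    "always 50%":          np.full(K, 0.5),
}

sigma_star = rmse(p_true, X)   # unavoidable error: RMSE of the ideal forecaster
for name, p_hat in forecasts.items():
    print(f"{name}: RMSE = {rmse(p_hat, X):.3f}  (unavoidable: {sigma_star:.3f})")
```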
 
  • #75
A. Neumaier said:
Motion is as intuitive as in the classical case, except that the world lines are now fuzzy world tubes because of the inherent uncertainty in observing quantum values. For example, an electron in flight has an extremely narrow world tube, while an electron orbiting a hydrogen atom in the ground state has a fuzzy world tube along the world line of the nucleus, with a fuzziness of the size of the Compton wavelength. I find this very intuitive!
It's the meaning of these world lines and tubes that I am curious about. If we say that, in classical physics, normalised density operators are to replace phase space variables as the objective properties of the system, then if I have a toy ball on a spring, the real properties of the ball are not its position and momentum, but rather those properties for which, under the right condition, statistical properties are arbitrarily close.

But maybe this is fine, and the objective properties of classical theories are also density operators but with zero quantum deviation. Maybe there is a future paper "Classical Tomography Explains Classical Mechanics"
 
  • #76
Morbert said:
It's the meaning of these world lines and tubes that I am curious about. If we say that, in classical physics, normalised density operators are to replace phase space variables as the objective properties of the system,
In classical physics, phase space is of course classical, and world lines are curves, not fuzzy tubes. The relevant limit in which physics becomes classical is the macroscopic limit ##N\to \infty##, and the law of large numbers then provides the right limiting behavior.

[To achieve a good formal (but unphysical) classical limit for microsystems one would have to scale both ##\hbar## and the uncertainties by factors that vanish in the limit.]
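(A toy sketch of the macroscopic-limit intuition above: the relative fluctuation of an extensive quantity built from ##N## microscopic contributions shrinks like ##1/\sqrt{N}##, so for large ##N## its value is effectively sharp, i.e. classical.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Relative fluctuation of an extensive quantity (sum of N single-particle contributions).
for N in (10, 1_000, 100_000, 10_000_000):
    samples = rng.normal(loc=1.0, scale=0.5, size=N)  # microscopic values with large spread
    total = samples.sum()
    rel_fluct = samples.std() * np.sqrt(N) / total    # std of the sum / value of the sum
    print(f"N = {N:>9}: relative fluctuation of the total = {rel_fluct:.2e}")
```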
 
  • #77
A. Neumaier said:
The precise version of the recipe by @vanhees71 is the following:
It might be a precise recipe, but it is certainly not what vanhees71 is claiming. His claim is simply that there is no problem at all with probabilities for single non-reproducible events. And the weather tomorrow seems to me to be such a non-reproducible event. If it is still "too reproducible," then you may look at earthquake prediction instead. vanhees71's claim that the ensemble interpretation would be perfectly applicable to such scenarios will remain exactly the same.

From my point of view, the task is to produce some intuitive understanding (for people like vanhees71, who have trouble understanding the difference) for why such scenarios contain elements absent from scenarios where the ensemble interpretation just works. And talking about the randomness of prime numbers would be too far from what such people perceive as randomness. So weather or earthquake prediction seem to me to be the scenarios where you have to try to clarify the difficulties.
 
  • #78
gentzen said:
It might be a precise recipe, but it is certainly not what vanhees71 is claiming. His claim is simply that there is no problem at all with probabilities for single non-reproducible events.
He claims (as I understand him) that there is no problem with the interpretation of probabilities made by some prediction algorithm for arbitrary sequences of events, each one single and non-reproducible. This is what I formalized.
gentzen said:
And talking about the randomness of prime numbers would be too far from what such people perceive as randomness.
But this is also well-defined. There are canonical measures (in the mathematical sense) used in analytic number theory to be able to employ stochastic methods. Nevertheless, everything there is deterministic, and indeed, in my book 'Coherent quantum physics', I gave this as an example showing that stochastic formalisms may have deterministic interpretations.

It is like using vector methods for applications that never employ vectors in their geometric interpretation as arrows.
gentzen said:
So weather or earthquake prediction seem to me to be the scenarios where you have to try to clarify the difficulties.
Weather predictions are made every day, so there is lots of statistics available. Thus my formalization of the meaning of probability forecasts applies. In particular, one can objectively test prediction algorithms for their quality. For example, the statement "Nowadays weather forecasts are much better than those 20 years ago" can be objectively proved.
 
  • #79
WernerQH said:
It's the nature of a harmonic oscillator to perform systematic oscillations, as expressed by its response function.
But it can oscillate with zero amplitude. A pendulum at small amplitude is a harmonic oscillator. Does it lose this property whenever it stands still at its equilibrium position? I don't think so, because it can be excited again!
 
  • #80
vanhees71 said:
I need to gain "enough statistics"
The idea is that in a real game, with an agent being a participant, the agent doesn't have the option to await or process arbitrary amounts of data. I.e., there is a race condition.

For the external observer, however, asymptotic data and in principle any required postprocessing can be realized. Then "enough statistics" can be obtained.

vanhees71 said:
Yes, I understand this, but physical experiments are not about decision making but about figuring out how Nature behaves, and for that I need to gain "enough statistics" in any experiment.
The problem is when "nature" changes before we have "enough statistics". This is why the paradigm you describe works for small subsystems, where in some sense we can realize these repeats, because then the timescale of the measurement AND the postprocessing is small relative to the lifetime of the observing context. In this case it seems natural that, given enough postprocessing, SOME patterns WILL be found, or can be "invented". Then we can see these as "timeless", but only because they are "timeless" relative to the timescale of the process, which in this case is small.

But this all breaks down for "inside observers", where one is put in a cosmological perspective, and the observer is necessarily saturated with more data than it can store and process in real time. But this is also, IMO, when things get more interesting!

Asymptotic observables alone do not seem very interesting.

/Fredrik
 
