A Evaluate this paper on the derivation of the Born rule

The discussion revolves around the evaluation of the paper "Curie Weiss model of the quantum measurement process" and its implications for understanding the Born rule in quantum mechanics. Participants express interest in the authors' careful analysis of measurement processes, though some raise concerns about potential circular reasoning in deriving the Born rule from the ensemble interpretation of quantum mechanics. The conversation highlights the relationship between state vectors, probability, and the scalar product in Hilbert space, emphasizing the need for a clear understanding of measurement interactions. There is also skepticism regarding the applicability of the model to real experimental setups, with calls for more precise definitions and clarifications of the concepts involved. Overall, the discourse reflects a deep engagement with the complexities of quantum measurement theory.
  • #241
I never ever have seen an operator in a physics lab, and my experimental colleagues measure observables, defined by appropriate measurement procedures. I don't know why you resist this simple fact of how physics is done.

I really described very clearly that macroscopic observables are, like any other observable, described by a self-adjoint operator in Hilbert space, and I further used your own rule for how to predict measurements of these observables in the typical case of macroscopically determined states: taking the expectation value and arguing why the fluctuations around this mean value are, under these circumstances, expected to be small compared to the macroscopically necessary accuracy. I don't understand why you argue against your own interpretation. Is it only because you are, for some incomprehensible reason, against the statistical interpretation of the state, i.e., Born's rule? The problem with this is that you are not willing to give a clear physical interpretation of the state. The formalism you give in your book is not at all clear for application in the physics lab!
 
  • #242
vanhees71 said:
I never ever have seen an operator in a physics lab, and my experimental colleagues measure observables
So what? I never ever have seen an observable, though I have done lots of measurements. Mass, distance, momentum, charge, etc. are all invisible.

But there are operators called position, momentum, distance, angular momentum, mass, spin, energy, charge, electric field in a region of space, etc., and these are measured in the lab.

How is the mass of the Earth (or of a distant star) defined in terms of lab measurements? I have never seen it explained anywhere in terms of your "equivalence class of measurement procedures". But every physicist understands the term as a theoretical quantity figuring as a parameter in the gravitational law. Based on that, a number of ways were found to measure it under appropriate conditions.
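To make concrete what I mean by a theoretical quantity made measurable through the law it appears in, here is a minimal sketch (Keplerian two-body motion and rounded textbook values assumed; this is only an illustration, not any specific measurement procedure):

```python
# Sketch: the Sun's mass is not read off any instrument; it is the parameter M
# in the gravitational law that makes the observed orbit come out right.
# Assumes Newtonian two-body (Keplerian) motion and rounded textbook inputs.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
T = 365.25 * 86400   # orbital period of the Earth, s
a = 1.496e11         # semi-major axis of the Earth's orbit, m

# Kepler's third law, T^2 = 4 pi^2 a^3 / (G M), solved for the mass:
M_sun = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"inferred solar mass: {M_sun:.3e} kg")   # roughly 2.0e30 kg
```

The measured raw data are a period and a distance; the mass appears only once the gravitational law is assumed.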

vanhees71 said:
The formalism you give in your book is not clear at all for application in the physics lab!
What does not apply?

It just needs to be augmented by a dictionary relating the notions in the book to the notions in the lab. This is easily done by telling which instruments prepare and measure what. Such a dictionary is necessary for the application of any language to anything, hence not the fault of my description. Even a book on experimental physics needs this dictionary to be applicable to the lab, unless you assume that the common language is already known. But then I am allowed to assume this as well!
 
  • #243
I don't think that we are able to communicate about the first part in an adequate way.

It is, e.g., very clear how to determine the mass of astronomical bodies from their motion, making use of (post)Newtonian theory. A very amazing example of the achievable accuracy is "Pulsar Timing":

http://th.physik.uni-frankfurt.de/~hees/cosmo-SS17/pulsar-timing-theorie.pdf

And I'd never say ##\hat{\vec{x}}## "is the position" but rather "it's the operator representing position" or, shorter, "it's the position operator" (the same holds for any observable).

I think the problem is your last paragraph
You just need to augment it by a dictionary relating the notions in the book to the notions in the lab. But this is necessary for the application of any language to anything, hence not the fault of my description. Even a book on experimental physics needs this dictionary to be applicable to the lab, unless you assume that the common language is already known. But then I am allowed to assume this as well!
It is not my task to "provide the dictionary relating the notions in the book to the notions in the lab", because I take the standard way physicists have done this for over 90 years now as sufficient, and the relation you ask for is simply the probabilistic meaning of the quantum state according to Born's rule, no more and no less. There is no distinction in principle between macroscopic and microscopic observables, only in the systems considered and in the degree of coarse-graining accepted as satisfactory accuracy in determining the "relevant" observables.

You deny the probabilistic meaning of the state and define "expectation values" with all the properties of the standard probabilistic meaning, while on the other hand denying this standard way of relating the formalism to the physics in the lab. To convince any physicist of your alternative interpretation, you must give the physical meaning of your mathematics by precisely what you formulated in the quoted paragraph, i.e., you have to "provide the dictionary relating the notions in the book to the notions in the lab".
 
  • #244
vanhees71 said:
It is not my task to "provide the dictionary relating the notions in the book to the notions in the lab" [...] you have to "provide the dictionary relating the notions in the book to the notions in the lab".
This is easily done by telling which instruments prepare and measure what; no more. I actually know this dictionary; I have more physics education than you may assume. Once this dictionary is set up one can check to which extent theory and experiment agree.

I know that with my thermal interpretation, quantum mechanics and experiment fully agree on the level of thermodynamic measurements. Because (as shown in my book) the probability interpretation can be derived from the thermal interpretation under the appropriate conditions, I also know that with my thermal interpretation, quantum mechanics and experiment fully agree for the Stern-Gerlach experiment or for quantum optical experiments.

Thus your claim is wrong that I deny the probabilistic meaning of the state in the cases where such a meaning is appropriate.
 
  • #245
That's great progress! So finally what you get is the standard probabilistic/statistical connection between theory and experiment. So what are we debating after all?
 
  • #246
vanhees71 said:
So finally what you get is the standard probabilistic/statistical connection between theory and experiment. So what are we debating after all?
I get easily both the standard probabilistic/statistical connection between theory and experiment in cases where it applies (namely for frequently repeated experiments), and the standard deterministic/nonstatistical connection between theory and experiment in cases where it applies (namely for experiments involving only macroscopic variables).

There is no need to assume a fundamental probabilistic feature of quantum mechanics, and no need to postulate anything probabilistic, since it appears as a natural conclusion rather than as a strange assumption about mysterious probability amplitudes and the like that must be put in by hand. Thus it is a significant conceptual advance in the foundations.
 
  • #247
A. Neumaier said:
So you measure once a single operator.
You have never explained what it means to "measure...an operator".
A. Neumaier said:
But Born's rule only applies to an ensemble of measurements, not to a single one.
I strongly disagree. Born's rule tells us the distribution function for all possible results of a single measurement.
 
  • #248
A. Neumaier said:
There is no need to assume a fundamental probabilistic feature of quantum mechanics, and no need to postulate anything probabilistic, since it appears as a natural conclusion rather than as a strange assumption about mysterious probability amplitudes and the like that must be put in by hand.
I disagree with this too. The magnitude of the "amplitude" has a natural interpretation as a distribution function for the simple reason that it is largest for the smallest changes. The "closer" the detected state is to the prepared state, the more likely it is to be found: ##P(a|\psi) = \text{monotonic function of}\ |\langle a|\psi\rangle|##.
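For definiteness, the standard quantitative form of that monotonic dependence is quadratic; a textbook spin-1/2 example, stated only to fix notation (nothing beyond ordinary quantum mechanics is assumed):
$$|\psi\rangle=\cos(\theta/2)\,|{\uparrow}\rangle+\sin(\theta/2)\,|{\downarrow}\rangle,\qquad P(\uparrow|\psi)=|\langle \uparrow|\psi\rangle|^2=\cos^2(\theta/2),$$
which is largest when the detected state ##|{\uparrow}\rangle## is closest to the prepared state (##\theta\to 0##) and vanishes when the two are orthogonal (##\theta\to\pi##).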
 
  • #249
mikeyork said:
Born's rule tells us the distribution function for all possible results of a single measurement.
The distribution function means almost nothing for a single measurement.

According to Born's rule, a position measurement gives a real number, and any is possible. Thus Born's rule is completely noninformative.
According to Born's rule, a number measurement gives some nonnegative integer, and any is possible. Again, Born's rule is completely noninformative.

For a spin measurement, Born's rule is slightly more informative for a single measurement; it tells you that you get either spin up or spin down, but this is all.
That the probability of spin up is 0.1495, say, is completely irrelevant for the single case; it means nothing.

For a measurement of the total energy of a crystal, Born's rule claims that the measurement result is one of a huge but finite number of values, most of them not representable as finite decimal or binary fractions. However, what is actually measured is always a result given as a decimal or binary fraction with a small number of digits.
Thus there is even a discrepancy between real measurement and what Born's rule claims.
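To spell out the contrast between the single case and the ensemble, here is a minimal simulation sketch (the value 0.1495 is just the example probability above; the code is generic and assumes nothing about any particular experiment):

```python
# Sketch: a single Born-rule outcome versus the long-run relative frequency.
# p_up = 0.1495 is the example probability quoted above, nothing more.
import numpy as np

rng = np.random.default_rng(0)
p_up = 0.1495

single = rng.random() < p_up
print("single outcome:", "up" if single else "down")   # just one of two values

for n in (10, 1_000, 100_000):
    freq = np.mean(rng.random(n) < p_up)
    print(f"relative frequency of 'up' over {n:>6} runs: {freq:.4f}")
# Only the long-run frequency approaches 0.1495; the single outcome carries
# essentially no trace of that number.
```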
 
  • #250
A. Neumaier said:
The distribution function means almost nothing for a single measurement.
So the distinction between (say) a delta function, a Gaussian or a uniform distribution function means "almost nothing" to you? Tell that to a gambler. Las Vegas loves people who think the games are a lottery. And Wall St loves people who stick pins in a list of stocks.
 
  • #251
vanhees71 said:
It is, e.g., very clear how to determine the mass of astronomical bodies from the motion, making use of (post)Newtonian theory.
Yes. You confirm exactly what I claimed, that the meaning of the observable called mass is determined not by a measurement procedure but by the theory - in your example (post)Newtonian theory. The measurement procedure is designed using this theory, and is known to give results of a certain accuracy only because it matches the theory to this extent.
 
  • #252
mikeyork said:
So the distinction between (say) a delta function, a Gaussian or a uniform distribution function means "almost nothing" to you? Tell that to a gambler. Las Vegas loves people who think the games are a lottery.
It means almost nothing for a single measurement. Gamblers never gamble only once.

At most the support has a meaning for the single case, as restricting the typical values. Not even restricting the possible values!

Note that ''with probability zero'' does not mean ''impossible'' but only that the fraction of realized cases among all tends to zero as the number of measurements goes to infinity. Thus a stochastic process that takes arbitrary values in the first 10^6 cases and zero in all later cases has a distribution function given by a delta function concentrated on zero. In particular, the distribution function is completely misleading for a single measurement of one of the initial cases.

Monte Carlo studies usually need to ignore a long sequence of initial values of a process, before the latter settles to the asymptotic distribution captured by the probabilistic model.
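A toy version of that warning (the transient length is arbitrary; this is only a sketch of a process whose early values look nothing like its limiting distribution):

```python
# Sketch: early samples of a process versus its asymptotic distribution.
# The transient length N0 is arbitrary, chosen only to make the point.
import numpy as np

rng = np.random.default_rng(1)
N0, N = 1_000, 1_000_000

x = np.zeros(N)
x[:N0] = rng.uniform(-10, 10, N0)   # "arbitrary" initial stretch
# x[N0:] stays exactly zero, so the limiting distribution is a point mass at 0.

print("fraction of samples equal to 0:", np.mean(x == 0))   # -> 0.999
print("spread of the first N0 samples:", x[:N0].std())      # about 5.8, not 0
# The asymptotic (delta-at-zero) distribution says nothing useful about any
# one of the first N0 measurements.
```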
 
  • #253
A. Neumaier said:
It means almost nothing for a single measurement. Gamblers never gamble only once.
Have you ever played poker? Every time it is your turn to act you are faced with a different "one-off" situation (a "prepared state" if you like). You decide what to do based on your idea of the probabilities of what will happen (the "detected state"). Poker players who do not re-assess the situation every time lose a great deal of money.
 
  • #254
mikeyork said:
Have you ever played poker?
Poker players never play only once.
mikeyork said:
Poker players who do not re-assess the situation every time lose a great deal of money.
That you need to argue with ''every time'' proves that you are not considering the single case but the ensemble.
 
  • #255
A. Neumaier said:
Poker players never play only once.

That you need to argue with ''every time'' proves that you are not considering the single case but the ensemble.
Of course not. Each single case is a different case. Most poker players never encounter the same situation twice. The ensemble does not exist.

Your argument amounts to the claim that there is no such thing as a "probability", only statistics.
 
  • #256
A. Neumaier said:
I get easily both the standard probabilistic/statistical connection between theory and experiment in cases where it applies (namely for frequently repeated experiments), and the standard deterministic/nonstatistical connection between theory and experiment in cases where it applies (namely for experiments involving only macroscopic variables).

There is no need to assume a fundamental probabilistic feature of quantum mechanics, and no need to postulate anything probabilistic, since it appears as a natural conclusion rather than as a strange assumption about mysterious probability amplitudes and the like that must be put in by hand. Thus it is a significant conceptual advance in the foundations.
I understand QT as the present fundamental theory of matter (with the qualification that we don't yet have a fully satisfactory QT of gravitation), and thus it should explain both extreme cases you cite from one theory. For me the standard minimal interpretation, used to connect the theory with real-world observations/experiments, is very satisfactory, and the key feature is the probabilistic interpretation. It explains both the meaning of observations on microscopic objects and the quasi-deterministic behavior of macroscopic observables on macroscopic systems. In the latter case, the "averaging" (done in the microscopic case by repeating an experiment many times) is "done" by the measurement apparatus itself. It's a spatial and/or temporal average. All this is well described within the statistical interpretation of the state.

You have a very similar way to define such "averages" in classical electrodynamics applied to optics, where you define the apparently time-independent intensity of light in terms of the classical electromagnetic field by a temporal average. If you follow the history of QT, I think it is fair to say that the original thinking on the meaning of the wave function by Schrödinger came via the analogy with this case. In optics you define the intensity of light as the energy density averaged over typical periods of the em. field (determined by the typical frequency of the emitted em. wave), and these are quadratic forms of the field, like the energy density itself,
$$\epsilon=\frac{1}{2} (\vec{E}^2+\vec{B}^2),$$
or the energy flow,
$$\vec{S}=c \vec{E} \times \vec{B}.$$
(em. energy per area and time; both in Heaviside-Lorentz units).
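For a monochromatic plane wave this temporal average is a one-line computation (the standard textbook case, written out only to fix the idea):
$$\vec{E}=\vec{E}_0\cos(\omega t-\vec{k}\cdot\vec{x}),\quad |\vec{B}|=|\vec{E}| \;\Rightarrow\; \langle \epsilon \rangle_T = \frac{1}{T}\int_0^T \mathrm{d} t\, \frac{1}{2}\left(\vec{E}^2+\vec{B}^2\right) = \frac{1}{2} \vec{E}_0^2,$$
with ##T=2\pi/\omega##, since ##\langle\cos^2\rangle_T=1/2##; the intensity is such a time-averaged quadratic form of the field.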

Schrödinger originally thought of the wave function as a kind of "density amplitude" and of its modulus squared as a density in a classical-field sense, but this was recognized pretty early as a wrong interpretation and led to Born's probability interpretation, which is the interpretation considered valid today. I still don't understand why you deny the Born interpretation as a fundamental postulate about the meaning of the quantum state, because it satisfactorily describes both extremes you quote above (i.e., microscopic observations on few quanta and macroscopic systems consisting of very many particles, leading to classical mechanics/field theory as an effective description for the macroscopically relevant observables) and also the "mesoscopic systems" lying somewhere in between (like quantum dots in cavity QED, ultracold rarefied gases in traps including macroscopic quantum phenomena like Bose-Einstein condensation, etc.).
 
  • #257
A. Neumaier said:
Yes. You confirm exactly what I claimed, that the meaning of the observable called mass is determined not by a measurement procedure but by the theory - in your example (post)Newtonian theory. The measurement procedure is designed using this theory, and is known to give results of a certain accuracy only because it matches the theory to this extent.
Sure, you also need theory to evaluate the masses of the bodies from the observables (as in my example, the pulsar-timing data). Thus this measurement of masses is clearly among the (amazingly accurate) operational definitions of mass. For other systems you need other measurement procedures (e.g., a mass spectrometer for single particles or nuclei). That's why I carefully talk about "equivalence classes of measurement protocols" that define the corresponding quantitative observables.

Indeed, pulsar timing is a very good example for this within a classical (i.e., non-quantum) realm. To test General Relativity you can determine the orbital parameters of the binary system from some observables and then deduce other post-Newtonian parameters to check whether they match the prediction from GR as one special post-Newtonian model of gravity. This gives some confidence in the correctness of the deduced values for, e.g., the masses of the two orbiting stars, but indeed this is possible only by giving the operational definition of the measured quantities like these masses to make the connection between theory (GR and post-Newtonian approximations of the two-body system) and observations (pulsar-timing data taken from a real-world radio telescope).
 
  • #258
A. Neumaier said:
The distribution function means almost nothing for a single measurement.

According to Born's rule, a position measurement gives a real number, and any is possible. Thus Born's rule is completely noninformative.
According to Born's rule, a number measurement gives some nonnegative integer, and any is possible. Again, Born's rule is completely noninformative.

For a spin measurement, Born's rule is slightly more informative for a single measurement; it tells you that you get either spin up or spin down, but this is all.
That the probability of spin up is 0.1495, say, is completely irrelevant for the single case; it means nothing.

For a measurement of the total energy of a crystal, Born's rule claims that the measurement result is one of a huge but finite number of values, most of them not representable as finite decimal or binary fractions. However, what is actually measured is always a result given as a decimal or binary fraction with a small number of digits.
Thus there is even a discrepancy between real measurement and what Born's rule claims.
Born's rule is very informative or doesn't tell you much, depending on the position-probability distribution given by the state in the way defined by this rule,
$$P(\vec{x})=\langle \vec{x}|\hat{\rho}|\vec{x} \rangle.$$
If this probability distribution is sharply peaked around some value ##\vec{x}_0##, it tells you that the particle will very likely be found in a small volume around this place, and almost never at other places, if the system is prepared in this state. If the probability distribution is very broad, the position is pretty much undetermined, and Born's rule indeed doesn't tell much about what to expect for the outcome of a position measurement. Of course, as with any probabilistic information, you can verify this information only on an ensemble. But that's what's implied in the "minimal statistical interpretation".
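A minimal numerical sketch of that contrast (pure states with Gaussian position wave functions; the widths are chosen arbitrarily, just to show the two regimes):

```python
# Sketch: how much Born's rule "tells you" depends on the width of
# P(x) = <x|rho|x>. Pure Gaussian wave packets, arbitrary widths.
import numpy as np

x = np.linspace(-50, 50, 20001)
dx = x[1] - x[0]

def born_density(sigma, x0=0.0):
    """|psi(x)|^2 for a normalized Gaussian wave packet of width sigma."""
    psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0)**2 / (4 * sigma**2))
    return np.abs(psi) ** 2

for sigma in (0.1, 10.0):
    p = born_density(sigma)
    prob_near_origin = np.sum(p[np.abs(x) < 1.0]) * dx
    print(f"sigma = {sigma:5.1f}:  P(|x| < 1) = {prob_near_origin:.3f}")
# sigma = 0.1 -> essentially 1 (a sharp prediction);
# sigma = 10  -> about 0.08 (almost no information about the outcome).
```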

I don't understand the last paragraph of your quote. Of course you need an apparatus with sufficient accuracy to resolve the individual possible values of an observable with a discrete spectrum like spin. Whether or not you can achieve this is a question of engineering a good enough measurement device, but not a fundamental problem within the theory.
 
  • #259
mikeyork said:
The ensemble does not exist.
The ensemble exists as an ensemble of many identically (by shuffling) prepared single cases. Just like identically prepared electrons are 'realized' in the measurement as different results.
 
  • #260
vanhees71 said:
I still don't understand, why you deny the Born interpretation as a fundamental postulate about the meaning of the quantum state
Because, as discussed in the other thread, there are a host of situations where Born's rule (as usually stated) does not apply, unless you interpret it (as you actually do) so liberally that any mention of probability in quantum mechanics counts as application of Born's rule. You yourself agreed that measuring the total energy (relative to the ground state) does not follow the letter of Born's rule.
vanhees71 said:
you can verify this information only on an ensemble.
The information in the statement itself is only about the ensemble since a given single case (only one measurement taken) just happened, whether it is one of the rare cases or one of the frequent ones.
vanhees71 said:
Of course you need an apparatus with sufficient accuracy to resolve the single possible values of an observable with a discrete spectrum like spin. Whether or not you can achieve this is a question of engineering a good enough measurement device but not a fundamental problem within the theory.
So you say that Born's rule is not about real measurements but about fictitious (sufficiently well resolved) idealizations of it! But this is not what the rule says. It claims to be valid about each measurement, not only about idealizations!
 
  • #261
A. Neumaier said:
The ensemble exists as an ensemble of many identically (by shuffling) prepared single cases. Just like identically prepared electrons are 'realized' in the measurement as different results.
Apparently you have never played poker. Apart from all the possible hands of cards there are all the other players at the table, their body language, the position of the dealer, the betting history and the stack sizes. As I said, most poker players never encounter the same situation twice. They merely look for similarities and possibilities and make an assessment based on their limited abilities every single hand they play.

In fact, this problem exists in all physical situations. Even every toss of a coin is a different event. No two events are ever exactly the same except in the limited terms we choose to describe them -- and that even includes your "identically prepared electrons" which, at the very least, differ in terms of the time (and therefore the environmental conditions) at which they are prepared.

My point remains: it is probability which is the fundamentally useful concept. Statistics are derivative and based on a limited description that enables counting of events where the differences are ignored.
 
  • #262
mikeyork said:
[...] most poker players never encounter the same situation twice.
That just means that poker is a more complex probabilistic system than a quantum spin, which has only 2 possible situations.

The paths of two Brownian particles are also never the same, but still Brownian motion is described by an ensemble. A million games of poker are in essence no different from a million paths of a Brownian particle; only the detailed model is different.
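To show what "described by an ensemble" means despite no two paths being alike, here is a standard random-walk sketch (step size, horizon and path count are arbitrary choices, nothing more):

```python
# Sketch: no two Brownian paths coincide, yet the ensemble statistics are
# perfectly well defined. Step size, horizon and path count are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 10_000, 1_000, 0.01

steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)                 # one row = one Brownian path

print("first two paths identical?", np.array_equal(paths[0], paths[1]))  # essentially never
print("ensemble mean at t = 10:    ", paths[:, -1].mean())   # theory: 0
print("ensemble variance at t = 10:", paths[:, -1].var())    # theory: t = n_steps*dt = 10
```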
 
  • #263
A. Neumaier said:
That just means that poker is a more complex probabilistic system than a quantum spin, which has only 2 possible situations.
Yes, it's a probabilistic system.
A. Neumaier said:
The paths of two Brownian particles are also never the same, but still Brownian motion is described by an ensemble.
No. Every Brownian particle is described by a distribution function (i.e. probability). "Never the same" and "ensemble" are mutually contradictory. We make the ensemble approximation by (1) ignoring the differences and (2) invoking the law of large numbers.
 
  • #264
mikeyork said:
it is probability which is the fundamentally useful concept. Statistics are derivative

What is your fundamental definition of "probability" as a concept, if it is not based on statistics from ensembles?
 
  • #265
PeterDonis said:
What is your fundamental definition of "probability" as a concept, if it is not based on statistics from ensembles?
Look at my post #3. Probability is associated with frequency counting, but it doesn't have to be defined that way. Probability is a theoretical quantity that can be mathematically encoded in many ways (QM provides one such encoding in the magnitude of the scalar product ##|\langle a|\psi\rangle|##); we just require that we be able to calculate asymptotic relative frequencies with it.

We can never actually measure probability by statistics because we cannot have an infinite number of events (even if we ignore differences that we think are unimportant).
 
  • #266
mikeyork said:
We can never actually measure probability [...]
Which makes it pretty useless, actually. Operational definitions are at least practically relevant.
 
  • #267
Mentz114 said:
Which makes it pretty useless, actually. Operational definitions are at least practically relevant.
So let's give up on theory? All that stuff about Hilbert spaces is useless guff?

Professional poker players should retire?
 
  • #268
A really interesting practical example of the failure of statistics was the 2008 financial crash. Although there were many contributory factors, the single most critical mathematical factor was the assumption that probabilities could be deduced from statistics. The particular model that was faulty was "Geometric Brownian Motion" -- the assumption that log prices were normally distributed, so that one only had to measure the first two moments (mean and variance).

More generally, a finite number of events can only tell you a finite number of moments, yet the higher moments of the underlying distribution function (probability) might be infinite. In 2008, this manifested in the phenomenon of "fat tails".

The same false assumption of a Gaussian distribution function was responsible for the demise of Long Term Capital Management in 1998.
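A toy illustration of that moment problem (Gaussian versus a heavy-tailed Student-t with matched variance; the parameters are picked only to make the contrast visible, not to model any market):

```python
# Sketch: two return distributions with the same variance but very different
# tails. The Student-t with 3 degrees of freedom has power-law tails, so its
# higher moments are infinite and rare large moves are far more common.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

gauss = rng.normal(0.0, 1.0, n)
t3 = rng.standard_t(df=3, size=n) / np.sqrt(3.0)   # rescaled to unit variance

for name, r in (("Gaussian", gauss), ("Student-t(3)", t3)):
    p_5sigma = np.mean(np.abs(r) > 5.0)
    print(f"{name:13s}: std = {r.std():.3f},  P(|r| > 5) = {p_5sigma:.2e}")
# The Gaussian model predicts ~6e-7 for a 5-sigma move; the fat-tailed model
# produces such moves orders of magnitude more often, at equal variance.
```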
 
  • #269
You misunderstand me again. Of course you can apply statistics and Born's rule also to inaccurate measurements, but as usually stated it is about precise measurements, and I don't think it helps to resolve our disagreement by introducing more and more complicating but trivial issues into the discussion before the simple cases are resolved.

You still don't give a clear explanation for your claim that Born's rule doesn't apply. If this were the case, it would imply that you can clearly disprove QT by a reproducible experiment. AFAIK that's not the case!
 
  • #270
mikeyork said:
A really interesting practical example of the failure of statistics was the 2008 financial crash. Although there were many contributory factors, the single most critical mathematical factor was the assumption that probabilities could be deduced from statistics. The particular model that was faulty was "Geometric Brownian Motion" -- the assumption that log prices were normally distributed, so that one only had to measure the first two moments (mean and variance).

More generally, a finite number of events can only tell you a finite number of moments, yet the higher moments of the underlying distribution function (probability) might be infinite. In 2008, this manifested in the phenomenon of "fat tails".

The same false assumption of a Gaussian distribution function was responsible for the demise of Long Term Capital Management in 1998.
Well, here probability theory and statistics as you describe them failed simply because the assumptions of a certain model were wrong. It's not a failure of the application of probability theory per se. Hopefully the economists learned from their mistakes and refined their models to better describe the real world. That's how empirical sciences work! If a model turns out to be wrong, you try to replace it with a better one.
 
