• #76
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
One can do statistics using a single particle in, e.g., a Penning trap, as described here:
https://doi.org/10.1088/0031-8949/1988/T22/016
but isn't this indeed a paradigmatic example for your formulation?
One can do statistics with any collection of measurement results.
But in the case you mention, where the data come from a single particle, the statistics is not governed by Born's rule. Each data point is obtained at a different time, and at each time the particle is in a different state affected in an unspecified way by the previous measurement. So how could you calculate the statistics from Born's rule?

Instead, the statistics is treated in the way I discussed in case (A).
Also nondestructive photon measurements are done,
If the nondestructive single photon measurements result in a time series, the situation for this photon is the same as for the particle in the Penning trap.
but also the standard photon detection of course measures properties of single photons like energy, momentum, and polarization, or what else do you think the photon measurements in all the accelerators in HEP and heavy-ion physics provide?
I didn't know that accelerators measure momentum and polarization of individual photons. Could you provide me with a reference where I can read details? Then I'll be able to show you how it matches the description in my paper.
 
  • #77
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I wouldn't call results of experiments with single electrons, protons, ions etc. in Penning traps, which are among the most precise measurements ever made, "of limited precision" ;-). The theoretical description uses standard quantum theory based on Born's rule (see Dehmelt's review quoted above).

Detectors measure particles and photons, of course. Real and virtual photons (dileptons) have been among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.
 
  • #78
Fra
3,476
273
I prefer to frame the known in an optimally rational way, rather than to speculate about the unknown.
This is certainly a respectable position, and I think your exposition is good from this position.

With what "is known" I think you effectively refer to human science. But if we even here consider and obsererver: What is the real difference between what an observers knows, and what it THINKS it knows? And does it make difference to the observes betting strategy? (action)

It doesn't matter who observes, except that poor observations lead to poor approximations of the state.
Just to contrast: in an interacting-agent view, "poor approximations" should lead to "poor actions", "poor betting", which should be observable by other agents. My take on this is different. As I see it, the inside agent/observer has no access to an external judgement of what is a good or bad approximation. The agent just has to act on the available evidence. What is right or wrong, poor or precise, should be irrelevant from the perspective of the agent's betting. So the causal mechanism is independent of whether the information is "right". Information as well as disinformation will provoke a response which depends on the subjective information. But these ideas are IMO part of the non-equilibrium parts. I.e., an agent that is consistently "wrong" will soon be put out of business in the overall game. So instead of thinking of an ordinary equilibration, I see it as an evolution (as state spaces also evolve, there is no objective entropic flow). During this process, agents have two choices: learn (improve their predictions) or face destruction (be deleted from the population pool). In this process lies also the emergence of symmetries. I seek and struggle with the mathematical, or rather algorithmic, description of this. This is admittedly more speculative though. But I am of the opinion that "speculation" and revision from feedback are at the heart of true learning. And I take this seriously also as applied to the "observer". Even a measurement can be seen as a "speculation", considering the choice of WHICH measurement to perform (in order to, say, maximize information gain).

Can we infer vanhees71's secret speculations about what he hopes to find (which one doesn't say out loud, as scientists should not be biased ;) ), from the way he chooses to construct the next measurement or experiment?

This is why I can not help viewing current QM as a limiting case of a very massive dominant agent, which is for all practical purposes classical and provides the background. I see that both the "preparation" and the "detectors" are constructed from the agent itself. The information gained lies between the "action" (which can be seen as a "preparation") and the "backreaction of the environment", which I abstractly see as the correspondence of a general measurement by a real inside observer.

I feel that trying to "close" or "polish" the limiting-case theory has a big value in itself, but it also risks polishing away the open ends that are the clues to progress, and I prefer the open ends as clues forward.

/Fredrik
 
  • #79
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
With what "is known" I think you effectively refer to human science. But if we even here consider and obsererver: What is the real difference between what an observers knows, and what it THINKS it knows? And does it make difference to the observes betting strategy? (action)
Science has no single betting strategy. Each scientist makes choices of his or her own preference, but only what passes the rules of scientific discourse gets published, which rules out most poor judgment on the individual's side. What science knows is an approximation to what it thinks it knows, and this approximation is quite good, otherwise the resulting technology based on it would not work and not sell.
 
  • Like
Likes Fra, dextercioby and vanhees71
  • #80
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
I wouldn't call results of experiments with single electrons, protons, ions etc. in Penning traps, which are among the most precise measurements ever made, "of limited precision".
I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.
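A minimal numerical illustration of this ##1/\sqrt{N}## scaling (a toy sketch with made-up numbers, not the actual analysis of any trap experiment): even a single expectation value estimated from ##N## binary measurement results carries a statistical error that shrinks only like ##N^{-1/2}##.

```python
# Toy illustration (made-up numbers, not an actual trap analysis): the error of
# an empirical frequency estimated from N binary measurement results shrinks
# roughly like 1/sqrt(N), by the law of large numbers.
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.7                                  # assumed true outcome probability

for N in (100, 10_000, 1_000_000):
    outcomes = rng.random(N) < p_true         # N simulated binary results
    print(N, abs(outcomes.mean() - p_true))   # error ~ 1/sqrt(N)
```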

What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.
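For orientation, a minimal sketch of what such a truncated description looks like in the smallest case ##d=2## (the Hamiltonian, decay rate, and initial state below are made-up, purely illustrative parameters, not taken from any experiment):

```python
# Sketch of a truncated Lindblad description for a qubit (d = 2) with one decay
# channel, integrated by a crude Euler step; all parameters are made up.
import numpy as np

H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)       # system Hamiltonian
L = np.array([[0, 1], [0, 0]], dtype=complex)               # decay operator sigma_minus
gamma, dt = 0.1, 0.001

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)     # initial state |+><+|

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

for _ in range(10_000):
    rho = rho + dt * lindblad_rhs(rho)                       # Euler step (fine for a sketch)

print(np.trace(rho).real, rho[1, 1].real)                    # trace stays ~1; the decaying population rho[1,1] shrinks
```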

This is very far from an accurate state determination....
The theoretical description uses standard quantum theory based on Born's rule (see Dehmelt's above quoted review).
Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single one of these systems. A probability distribution almost never determines an individual result.

I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.
Detectors measure particles and photons of coarse. Real and virtual photons (dileptons) are among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.
Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.
 
  • Like
Likes gentzen and vanhees71
  • #81
DrDu
Science Advisor
6,210
866
That's a nice article. However, I somehow miss an explanation of what is actually meant by "quantum tomography", and one has to resort to the arXiv preprint to get an explanation. Given the title of the Insights article, maybe you could add some words on what is meant by quantum tomography.
 
  • Like
Likes dextercioby
  • #82
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.

What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.

This is very far from an accurate state determination....

Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single one of these systems. A probability distribution almost never determines an individual result.

I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.

Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.
In Dehmelt's paper it is described how various quantities are measured using single electrons/ions in a Penning trap. I still don't understand why you think there cannot be statistics collected using a single quantum. I can also get statistics of throwing a single coin again and again to check whether it's a fair one or not. I just do the "random experiment" again and again using the same quantum, collect statistics, and evaluate confidence levels and all that. Another review paper, which may be more to the point, because it covers both theory and experiment, is

https://doi.org/10.1103/RevModPhys.58.233

I also think that very rarely one does full state determinations. What's done are preparations and subsequent measurements of observables of interest.

I'm also not an experimental physicist and far from knowing any details of how the current CERN experiments (ATLAS, CMS, and ALICE) measure electrons and photons. I use their results to compare to theoretical models, which are based on standard many-body QFT and simulations of the fireball created in heavy-ion collisions. All this is based on standard quantum theory and thus after all on Born's rule. Here you can look at some papers by the ALICE collaboration as one example for what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons) that's a pretty new topic, and of course an even greater challenge than the spectra measured for decades now. After all, these are "rare probes".
 
  • #83
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
I still don't understand why you think there cannot be statistics collected using a single quantum.
I don't think that, and I explicitly said so. The point is that this statistics is not statistics about an ensemble of identically prepared systems, hence it has nothing to do with what Born's rule is about.
I can also get statistics of throwing a single coin again and again to check whether it's a fair one or not.
In this case the system identically prepared is the throw, not the coin. The coin is a system described by a rigid body, with a 12D phase-space state ##z(t)##, in contact with an environment that randomizes its motion through its collisions with the table. The throw is what you can read off when the coin is finally at rest.

The state of the coin is complicated and cannot be identically prepared (otherwise it would fall identically and produce identical throws). But the state of the throw is simple - just a binary variable, and the throwing setup prepares its state identically. Each throw is different - only the coin is the same; that's why one gets an ensemble.

This is quite different from a quantum particle in a trap, unless (as in a throw) you reset the state of the particle in the trap before each measurement. But then the observation becomes uninteresting. The interesting thing is to observe the particle's time dependence. Here the state changes continuously, as with the coin and not as with the throw.
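To make the contrast concrete, here is a toy simulation (my own illustration with arbitrary dynamics, not a model of either a coin or a Penning trap): a record obtained with a reset before every observation is i.i.d., while a record sampled from one continuously evolving state is correlated in time.

```python
# Toy contrast (arbitrary dynamics, purely illustrative): a record taken with a
# reset before every observation is i.i.d., while a record sampled from one
# continuously evolving state is correlated in time.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# (a) reset before each observation: independent, identically prepared "throws"
throws = rng.integers(0, 2, size=N).astype(float)

# (b) no reset: binary readout of a single, slowly diffusing state
x = np.cumsum(rng.normal(scale=0.1, size=N))     # correlated trajectory
samples = (np.sin(x) > 0).astype(float)

def lag1_corr(a):
    a = a - a.mean()
    return (a[:-1] * a[1:]).mean() / a.var()

print("lag-1 correlation with reset:   ", lag1_corr(throws))   # ~ 0
print("lag-1 correlation without reset:", lag1_corr(samples))  # close to 1
```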
 
  • #84
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
All this is based on standard quantum theory and thus after all on Born's rule.
The 'thus' is not warranted.

Quantum field theory is completely independent of Born's rule. It is about computing ##N##-point functions of interest.

Weinberg's QFT book (Vol. 1) mentions Born's rule exactly twice - once in its review of quantum mechanics, and once where the probabilistic interpretation of the scattering amplitude is derived. In the latter case he assumes an ensemble of identically prepared particles to give a probabilistic meaning in terms of the statistics of collision experiments.

Nothing at all about single systems!
 
  • #85
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I don't think that we will reach consensus about this issue. For me Born's rule is one of the fundamental postulates of QT (including QFT). You calculate the correlation functions (Green's functions) in QFT to get statistical information about observables like cross sections. How these correlation functions are related to the statistics of measurement outcomes is derived based on the fundamental postulates of QT, including Born's rule. Of course, that's what Weinberg and any other book on QFT does. A cross-section measurement of course always consists of collecting statistics over very many collision events, not using the same particles again and again.

You yourself use Born's rule all the time, since everything is based on taking averages of all kinds defined by ##\langle A \rangle=\mathrm{Tr} \hat{\rho} \hat{A}## (if you use normalized ##\hat{\rho}##'s).
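For concreteness, a minimal numerical check of that formula for a qubit (a generic textbook example, not tied to any particular experiment discussed here):

```python
# Minimal check of <A> = Tr(rho A) for a qubit.
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)     # observable A
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
rho = np.outer(psi, psi.conj())                          # normalized pure-state density matrix

print(np.trace(rho @ sigma_z).real)                      # equals cos(0.3)**2 - sin(0.3)**2
```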
 
  • Like
Likes Lord Jestocost
  • #86
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
Another review paper, which may be more to the point, because it covers both theory and experiment, is
https://doi.org/10.1103/RevModPhys.58.233
[...] you can look at some papers by the ALICE collaboration as one example for what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons) that's a pretty new topic,
Are the papers where I can read about ALICE measurements and about polarization measurements cited in the above review?
 
  • #87
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
  • #88
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
That's a nice article. However, I somehow miss an explanation of what is actually meant by "quantum tomography", and one has to resort to the arXiv preprint to get an explanation. Given the title of the Insights article, maybe you could add some words on what is meant by quantum tomography.
Thanks. I added to the Insights article a link to Wikipedia and an explanatory paragraph.
 
  • #89
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
I was referring to the measurements on single particles in a trap, not on ALICE photon measurements. There are tons of papers about "direct photons":

https://inspirehep.net/literature?sort=mostrecent&size=25&page=1&q=find title photons and cn alice

Polarization measurements for dileptons or photons are very rare today. There's a polarization measurement by the NA60 collaboration on di-muons:

https://arxiv.org/abs/0812.3100
Thanks for the pointers. I will reply in more detail after having read more. I expect it will turn out that the instances of case (B) are not so different from those of case (A) in my earlier classification of single-particle measurements.
 
  • #90
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I still do not understand why you say that the content of the review papers by Dehmelt and Brown contains anything denying the validity of Born's rule. For me it's used all the time!
 
  • #91
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
I still do not understand why you say that the content of the review papers by Dehmelt and Brown contains anything denying the validity of Born's rule. For me it's used all the time!
Because Born's rule assumes identical preparations, which is not the case when a nonstationary system is measured repeatedly. I am not denying the validity but the applicability of the rule!

I need to read the paper before I can go into details.
 
  • #92
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I don't understand this argument. You just measure repeatedly some observable. The measurements (or rather the reaction of the measured system to the coupling to the measurement device) themselves of course have to be taken into account as part of the "preparation" too.
 
  • #93
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
I don't understand this argument. You just measure repeatedly some observable. The measurements (or rather the reaction of the measured system to the coupling to the measurement device) themselves of course have to be taken into account as part of the "preparation" too.
It is a preparation, but not one to which Born's rule applies. Born's rule is valid only if the ensemble consists of independent and identically prepared states. You need independence because, e.g., immediately repeated position measurements of a particle do not respect Born's rule, and you need identical preparation because there is only one state in Born's formula.

In the case under discussion, one may interpret the situation as repeated preparation, as you say. But unless the system is stationary (and hence uninteresting in the context of the experiment under discussion), the state prepared before the ##k##th measurement is different for each ##k##. Moreover, due to the preceding measurement this state is only inaccurately known and correlated with the preceding one. Thus the ensemble prepared consists of nonindependent and nonidentically prepared states, about which Born's rule is silent.
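A toy model may make the point explicit (my own sketch, with an arbitrarily chosen rotation between measurements, not the dynamics of a trapped particle): repeated projective measurements on one evolving qubit yield a correlated record whose statistics differ from what Born's rule gives for an ensemble of freshly and identically prepared copies.

```python
# Toy model (arbitrary rotation angle, purely illustrative): repeated projective
# sigma_z measurements on ONE qubit that evolves between measurements give a
# correlated record whose statistics differ from Born's rule applied to
# freshly prepared copies.
import numpy as np

rng = np.random.default_rng(2)
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)   # evolution between measurements
up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

def measure_z(psi):
    """Projective z measurement: return (outcome, collapsed state)."""
    if rng.random() < abs(psi[0]) ** 2:
        return 1, up.copy()
    return 0, down.copy()

# (1) one system, measured repeatedly without re-preparation
psi, record = up.copy(), []
for _ in range(50_000):
    outcome, psi = measure_z(U @ psi)
    record.append(outcome)
record = np.array(record)

# (2) ensemble: a freshly prepared copy in state |up> for every measurement
fresh = np.array([measure_z(U @ up)[0] for _ in range(50_000)])

print("single-system record mean:", record.mean())  # ~ 0.5 here, shaped by the whole history
print("fresh-ensemble mean:      ", fresh.mean())   # ~ cos(theta)**2, the Born-rule value
```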
 
  • Like
Likes dextercioby
  • #94
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
This would imply that you cannot describe the results about a particle in a Penning trap with standard quantum theory, but obviously that has been done successfully for decades!
 
  • #95
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
This would imply that you cannot describe the results about a particle in a Penning trap with standard quantum theory,
This statement is indeed true if you restrict standard quantum theory to mean the formal apparatus plus Born's rule in von Neumann's form. Already the Stern-Gerlach experiment discussed above is a counterexample.
but obviously that has been done successfully for decades!
This is because standard quantum theory was never restricted to a particular interpretation of the formalism. Physicists advancing the scope of applicability of quantum theory were always pragmatic and used whatever they found suitable to match the mathematical quantum formalism to particular experimental situations. This - and not what the introductory textbooks say - was and is the only relevant criterion for the interpretation of quantum mechanics. The textbook version is only a simplified a posteriori rationalization.

This pragmatic approach worked long ago for the Stern-Gerlach experiment. The same pragmatic stance has also worked for decades for the quantum jump and quantum diffusion approaches to nonstationary individual quantum systems, to the extent of leading to a Nobel prize. They simply need more flexibility in the interpretation than Born's rule offers. What is needed is discussed in Section 4.5 of my paper.
 
  • #96
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I don't understand what the content of Sect. 4.5 has to do with our discussion. I don't see how you can come to the conclusion that the "pragmatic use" of the formalism contradicts the Born rule as the foundation. To the contrary, all these pragmatic uses are based on the probabilistic interpretation of the state a la Born. Also, as I said before, I don't understand how you can say that with a non-stationary source no accuracy is reachable, while the quoted Penning-trap experiments lead to results which are among the most accurate measurements of quantities like the gyro-factor of the electron or, as just recently reported even in the popular press, the charge-mass ratio of the antiproton.

Nowhere in your paper can I see that there is anything NOT based on Born's rule. You use the generalization to POVMs, but I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
 
  • #97
Fra
3,476
273
Science has no single betting strategy. Each scientist makes choices of his or her own preference, but only what passes the rules of scientific discourse gets published, which rules out most poor judgment on the individual's side. What science knows is an approximation to what it thinks it knows, and this approximation is quite good, otherwise the resulting technology based on it would not work and not sell.
Yes. By a similar reasoning, I think observers/agents that fail to adapt to their environment will not be ubiquitous. But the fitness is relative to the environment only, just as a learning agent will be "trained" by what it's exposed to. What is true in an absolute sense seems to be about as irrelevant as absolute space is to relative motion.

/Fredrik
 
  • #98
Fra
3,476
273
Nowhere in your paper can I see that there is anything NOT based on Born's rule. You use the generalization to POVMs, but I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
As I read this again, I think I also may have confused the "issue" with Born's rule. Some objections I have in mind (having to do with the choice of optimal compression) seem to be off topic here, but now it seems that the main point here is the generalized "Born rule" - is it the one relevant for mixed states? But as vanhees71 says, the core essence of the "Born rule" is still there, right?

/Fredrik
 
  • #99
A. Neumaier
Science Advisor
Insights Author
8,026
3,893
I don't understand what the content of Sect. 4.5 has to do with our discussion. I don't see how you can come to the conclusion that the "pragmatic use" of the formalism contradicts the Born rule as the foundation.
I didn't claim a contradiction with, I claimed the nonapplicability of Born's rule. These are two very different claims.
all these pragmatic uses are based on the probabilistic interpretation of the state a la Born.
You seem to follow the magic interpretation of quantum mechanics. Whenever you see statistics on measurements done on a quantum system you cast the magic spell "Born's probability interpretation", and whenever you see a calculation involving quantum expectations you wave a magic wand and say "ah, an application of Born's rule". In this way you pave your way through every paper on quantum physics and say with satisfaction at the end, "This paper proves again what I knew for a long time, that the interpretation of quantum mechanics is solely based on the probabilistic interpretation of the state a la Born".

You simply cannot see the difference between the two statements
  1. If an ensemble of independent and identically prepared quantum systems is measured, then ##p_k=\langle P_k\rangle## is the probability of occurrence of the ##k##th event.
  2. If a quantum system is measured, then ##p_k=\langle P_k\rangle## is the probability of occurrence of the ##k##th event.
The first statement is Born's rule, in the generalized form discussed in my paper.
The second statement (which you repeatedly employed in your argumentation) is an invalid generalization, since the essential hypothesis is missing under which the statement holds. Whenever one invokes Born's rule without having checked that the ensemble involved is actually independent and identically prepared, one commits a serious scientific error.

It is an error of the same kind as concluding from ##x=2x##, through division by ##x##, that ##1=2##, because the assumption necessary for the argument was ignored.
Also, as I said before, I don't understand how you can say that with a non-stationary source no accuracy is reachable, while the quoted Penning-trap experiments lead to results which are among the most accurate measurements of quantities like the gyro-factor of the electron or, as just recently reported even in the popular press, the charge-mass ratio of the antiproton.
This is not a contradiction, since neither the gyro-factor of the electron nor the charge-mass ratio of the antiproton is an observable in the traditional quantum mechanical sense; they are constants of Nature.

A constant is stationary and can in principle be arbitrarily well measured, while the arbitrarily accurate measurement of the state of a nonstationary system is in principle impossible. This holds already in classical mechanics, and there is no reason why less predictable quantum mechanical systems should behave otherwise.
Nowhere in your paper can I see that there is anything NOT based on Born's rule. You use the generalization to POVMs, but I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
This is because of your magic practices, in conjunction with mixing up "contradiction to" and "not applicable". Both prevent you from seeing what everyone else can see.
 
  • Like
Likes dextercioby, Fra and gentzen
  • #100
vanhees71
Science Advisor
Insights Author
Gold Member
2021 Award
19,495
10,251
I think the problem is that I understand something completely different when I read this paper than what the authors intend. In particular I have no clue why behind the entire formalism of the description of the outcomes of measurements there should not be Born's rule. For me POVMs are just a description of measurement devices and the corresponding experiments where one does not perform an ideal von Neumann filter measurement, and it's of course right that only very few real-world experiments are such ideal von Neumann filter measurements, so that a more general description is needed for the experiments that have become possible nowadays (starting roughly with the first Bell tests by Aspect et al.).

My understanding of the paper is that it is very close to the view as provided, e.g., by Asher Peres in his book

A. Peres, Quantum Theory: Concepts and Methods, Kluwer
Academic Publishers, New York, Boston, Dordrecht, London,
Moscow (2002).

What's new is the order of presentation, i.e., starting from the most general case of "weak measurements" (described by POVMs) and then bringing in the standard-textbook notion of idealized von Neumann filter measurements as a special case, and this makes a lot of sense if you are aiming at a deductive (or even axiomatic) formulation of QT. The only problem seems to be that this view is not what the author wants to express, and I have no idea what the intended understanding is.

Maybe it would help if a concrete measurement were discussed, e.g., the nowadays standard experiment with single ("heralded") photons (e.g., produced with parametric down conversion using a laser and a BBO crystal, using the idler photon as the "herald" and then doing experiments with the signal photon). In my understanding such a "preparation procedure" determines the state, i.e., the statistical operator in the formalism. Then one can do an experiment, e.g., a Mach-Zehnder interferometer with polarizers, phase shifters etc. in the two arms, and then you have photon detectors to do single-photon measurements. It should be possible to describe such a scenario completely with the formalism proposed in the paper, and then to point out where, in the view of the author, this contradicts the standard statistical interpretation a la Born.
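As a minimal version of such a scenario, here is a toy calculation of single-photon detection probabilities in a bare Mach-Zehnder interferometer (the 50:50 beam-splitter convention and phase values are my own assumptions, and no polarizers are included):

```python
# Toy single-photon Mach-Zehnder calculation: two 50:50 beam splitters with a
# phase shifter in one arm; detection probabilities at the two output ports.
import numpy as np

BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)    # symmetric 50:50 beam splitter

def detection_probs(phi):
    """Detection probabilities at the two output ports for relative phase phi."""
    phase = np.diag([np.exp(1j * phi), 1.0])            # phase shifter in one arm
    photon_in = np.array([1.0, 0.0], dtype=complex)     # photon enters input port 1
    photon_out = BS @ phase @ BS @ photon_in
    return np.abs(photon_out) ** 2

for phi in (0.0, np.pi / 2, np.pi):
    print(phi, detection_probs(phi))   # interference: (sin(phi/2)**2, cos(phi/2)**2)
```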
 