Jürg Fröhlich on the deeper meaning of Quantum Mechanics

In summary, the paper by Jürg Fröhlich discusses the shortcomings of the standard formulation of quantum mechanics. He felt that the subject had better remain a hobby until later in his career, but when he was approaching mandatory retirement he felt an urge to clarify his understanding of some of the subjects he had had to teach his students for thirty years. The paper presents a completion of QM, the ''ETH-Approach to QM,'' which is too abstract to become popular. Interesting paper.
  • #176
DarMM said:
It always seemed to me if you were going to view the quantum state in a probabilistic way then pure states are states of maximal knowledge rather than the "true state".
For someone who thinks that the state is associated with the observer (a subject) rather than the experiment (an object) there is no true state, only subjective assignments. But for a frequentist, the state contains true information about an ensemble of experiments. Or what else should distinguish the frequentist from the Bayesian?
DarMM said:
In a frequentist approach they'd be ensembles with minimal entropy. Either way they're not ignorance of the true pure state.
Pure and mixed states are different ensembles, representing different statistics (if one could do experiments differentiating the two) and hence different objective realities. Only one of them is real.

For a 2-state system (polarized beams of light) one can easily differentiate between light prepared in an unpolarized state (true density matrix = 1/2 unit matrix) and a completely polarized state (true density matrix of rank 1), and - in the limit of an unrestricted number of experiments - one can find out the true state by quantum tomography.

On the other hand, a consistent Bayesian who doesn't know how the light is prepared and considers himself entitled by Jaynes or de Finetti to treat his complete lack of knowledge in terms of the natural noninformative prior will assign to both cases the same density matrix (1/2 unit matrix), and will lose millions of dollars in the second case should he bet that much on the resulting statistics.

Thus the correct state of a 2-state system, whether pure or mixed, conveys complete knowledge about the objective information that can possibly be obtained, while any significantly different state will lead to the wrong statistics. This must be part of any orthodoxy that can claim agreement with experiment.

I don't think that anything changes for bigger quantum systems simply because quantum tomography is no longer practically feasible. (The example of interference of quantum systems, which can be shown for larger and larger systems, suggests that there is no ''complexity border'' beyond which the principles change.)
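
For concreteness, here is a minimal numerical sketch of such a tomographic reconstruction for the polarization example above (the Pauli measurement set, sample sizes, and variable names are merely illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def tomography(rho_true, n_per_basis=100_000):
    # Estimate rho from simulated +1/-1 outcomes of sigma_x, sigma_y, sigma_z measurements.
    s_est = []
    for sigma in (X, Y, Z):
        p_plus = np.real(np.trace(rho_true @ (I2 + sigma) / 2))  # Born rule for outcome +1
        outcomes = rng.choice([1, -1], size=n_per_basis, p=[p_plus, 1 - p_plus])
        s_est.append(outcomes.mean())
    sx, sy, sz = s_est
    return (I2 + sx * X + sy * Y + sz * Z) / 2                   # reconstructed density matrix

unpolarized = I2 / 2                                    # true density matrix = 1/2 unit matrix
polarized = np.array([[1, 0], [0, 0]], dtype=complex)   # rank-1 projector, completely polarized

print(np.round(tomography(unpolarized), 3))   # approaches the 1/2 unit matrix
print(np.round(tomography(polarized), 3))     # approaches the rank-1 projector

In the limit of unrestricted repetitions both reconstructions converge to the respective true states, which is the sense in which the two preparations are objectively distinguishable.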

DarMM said:
I'm going back to simpler topics like Constructive Field Theory! :biggrin:
You could instead go forward and solve the mathematical challenges involved in the thermal interpretation! There everything is as well-defined as in Constructive Field Theory but as the subject matter is new, it is not as hard to make significant progress!
 
  • Like
Likes DarMM
  • #177
A. Neumaier said:
For a 2-state system (polarized beams of light) one can easily differentiate between light prepared in an unpolarized state (true density matrix = 1/2 unit matrix) and a completely polarized state (true density matrix of rank 1), and - in the limit of an unrestricted number of experiments - one can find out the true state by quantum tomography.

On the other hand, a consistent Bayesian who doesn't know how the light is prepared and considers himself entitled by Jaynes or de Finetti to treat his complete lack of knowledge in terms of the natural noninformative prior will assign to both cases the same density matrix (1/2 unit matrix), and will lose millions of dollars in the second case should he bet that much on the resulting statistics.
An Objective Bayesian isn't too different from a Frequentist here. They think there is a single "best" set of beliefs given the currently observed statistics. A Subjective Bayesian will be permitted any prior initially, but via the representation theorem (a generalization of de Finetti's classical one, there are a few different proofs of this by now) will update toward a different state if the observations do not match their proposed state.

I don't think quantum tomography differs much between the three views, as all three are used in the Quantum Information literature, though the Bayesian views are more common. There's a major paper in Quantum Information on this topic: https://arxiv.org/abs/quant-ph/0104088
 
  • #178
vanhees71 said:
(and I consider the collapse hypothesis as NOT part of the standard interpretation). You know Bohr's papers better than I, but as far as I know, Bohr never emphasized the collapse so much.
Bohr didn't mention the collapse in his published writings (only in an unpublished draft, just once).
But some form of collapse is needed at least in some situations, to be able to know what state to assume after a quantum system passes a filter (such as a slit or a polarizer). This cannot be derived from Born's rule without collapse.

vanhees71 said:
It is true that introductory textbooks first discuss the ideal case of complete measurements, i.e., you prepare a system (in the introductory part of textbooks even restricted to pure states) and then measure one or more observables precisely. This is to start with the simplest case to set up the theory. You also do not start with symplectic manifolds, Lie derivatives and all that to teach classical Newtonian mechanics ;-)).

Later you extend the discussion to mixed states and all that.
One could instead start with the simplest case of a 2-state system, a beam of natural light passing through a polarizer and detected by a photocell. It features a density matrix corresponding to a mixed state that collapses to a pure state through the interaction with the filter. Once one has discussed the properties of polarizers one can discuss quantum tomography, and finds an objective notion of a state (if one is a frequentist). Using a little theory as described in my Insight article on the qubit, one can derive the Schrödinger equation, and everything else that matters for a single qubit.

From this single and elementary example one gets mixed states, collapse, Born's rule, and the Schrödinger equation (and if you like, the thermal interpretation) - everything needed for a good and elementary introduction to quantum mechanics, without having to tell a single children's fable.
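
In formulas, with ##P_e=|e\rangle\langle e|## the projector onto the polarizer's transmission direction, the elementary example reads
$$\rho_{\rm in}=\tfrac{1}{2}\hat{1},\qquad p_{\rm transmit}={\rm Tr}(\rho_{\rm in}P_e)=\tfrac{1}{2},\qquad \rho_{\rm out}=\frac{P_e\,\rho_{\rm in}\,P_e}{{\rm Tr}(\rho_{\rm in}P_e)}=|e\rangle\langle e|,$$
i.e., Born's rule gives the detection rate at the photocell and the collapse at the filter (written here in the standard projection form) turns the mixed beam into a pure, fully polarized one.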

vanhees71 said:
I shouldn't waste my time anymore to discuss philosophical issues in this forum. It's kind of fighting against religious beliefs rather than having a constructive scientific discussion.
The problem is that in terms of the philosophy of physics you are a religious zealot fighting other religious zealots with a different religion...
A. Neumaier said:
Those like @vanhees71 and Englert (see post #14), who found an amendment that they personally find consistent and in agreement with their knowledge about the use of quantum mechanics, then think they have solved the problem, regard their version as the true orthodoxy, and claim that there is no measurement problem. But these orthodoxies are usually mutually incompatible, and are often flawed in points their inventors did not thoroughly inspect for possible problems. This can be seen from how the proponents of one orthodoxy speak about the tenets of other orthodoxies that don't conform to their own harmonization.
 
Last edited:
  • #179
DarMM said:
There's a major paper in Quantum Information on this topic: https://arxiv.org/abs/quant-ph/0104088
I'll have a look at it...
DarMM said:
A Subjective Bayesian will be permitted any prior initially, but via the representation theorem (a generalization of de Finetti's classical one, there are a few different proofs of this by now) will update toward a different state if the observations do not match their proposed state.
1. Please tell me what the standard update rule for the mixed state ##\rho## of the 2-state system is when the result of a test for a particular polarization state becomes available. I think there is no canonical (optimal) way of making the update; or please correct me.

2. The update does not help when the bet has to be made before further knowledge can be accumulated. A subjective Bayesian will bet (or why shouldn't he, according to the Bayesian paradigm?). A frequentist will acknowledge that he knows nothing and the law of large numbers (on which he relies for his personal approximations to the true state) is not yet applicable. Thus he will not accept any bet.

3. Suppose that the light is prepared using photons on demand (one per second) by a device that rotates the polarizer every second by an angle of ##\alpha=\pi(\sqrt{5}-1)/2##.

The subjective Bayesian, following the recipe for Bayesian state updates to be revealed to me as the answer to 1., will only get random deviations from his initially unpolarized state.
But the frequentist can apply whatever statistical technique he likes to form his personal approximation, and can verify the preparation scheme (and then achieve better and better predictions) by an autoregressive analysis combined with a cyclically repeated tomographic scheme that provides the data for the former.
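
For concreteness, a minimal simulation of this preparation (assuming, purely for illustration, that each photon is tested with a fixed horizontal polarizer):

import numpy as np

rng = np.random.default_rng(1)
alpha = np.pi * (np.sqrt(5) - 1) / 2       # rotation of the preparing polarizer per photon
n_photons = 100_000
angles = alpha * np.arange(n_photons)      # linear polarization angle of photon n
p_pass = np.cos(angles) ** 2               # Born (Malus) rule for passing the fixed horizontal test
outcomes = rng.random(n_photons) < p_pass  # simulated detector clicks

print(outcomes.mean())                     # ~0.5: pooled counts look like unpolarized light
print(np.abs(outcomes - p_pass).mean())    # ~0.25: mean error of the time-resolved prediction
print(np.abs(outcomes - 0.5).mean())       # 0.5: mean error of the fixed "unpolarized" assignment

The pooled statistics are indistinguishable from those of an unpolarized source, while a model that tracks the rotation predicts the individual shots much better.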
 
  • #180
Are you familiar with de Finetti's representation theorem in the case of classical statistics before I begin an exposition?

To some degree there isn't anything shocking about the quantum case once you know the analogous theorem holds.
 
  • #181
DarMM said:
Are you familiar with de Finetti's theorem in the case of classical statistics before I begin an exposition?

To some degree there isn't anything shocking about the quantum case once you know the analogous theorem holds.
I was familiar with it 20 years ago. But then I lost interest in subjective interpretations, which more and more seemed to me contrived. (A true subjectivist is free to update in any way he likes, but then the theory no longer says anything about the temporal fate of the density matrix. Thus we need some sort of objective, optimal subjectivist. But this means that there is no freedom left - at least not asymptotically. Thus the objective, optimal subjectivist is sooner or later a frequentist...)

So I no longer recall its contents. (But I'll read the paper you pointed to; you don't need to explain.) For the present discussion I just want an answer to point 1 - an explicit update rule for the density matrix, given the current density matrix, a polarizer setting, and an observation (1 or 0), depending on whether the photon was or wasn't detected.
 
  • #182
A. Neumaier said:
For the present discussion I just want an answer to point 1 - an explicit update rule for the density matrix, given the current density matrix, a polarizer setting, and an observation (1 or 0), depending on whether the photon was or wasn't detected
1.3 in that paper is the basic rule. There's more details later on in the paper.
 
  • #183
DarMM said:
1.3 in that paper is the basic rule. There's more details later on in the paper.
Oh, so the subjective Bayesian describes the quantum system not by a density operator but by a probability distribution on the space of density operators? Thus his beliefs have a much bigger state space than that of quantum mechanics, which is described by single density operators.
 
  • #184
A. Neumaier said:
The update does not help when the bet has to be made before further knowledge can be accumulated. A subjective Bayesian will bet (or why shouldn't he, according to the Bayesian paradigm?
Well it's not as if Subjective Bayesianism is a statement that knowledge doesn't matter and you can bet when you want.

Rather, take a horse race with the horses given various probabilities of winning by the bookies (I'm not talking about the odds, but the probabilities the bookie will use prior to offering odds). To Bayesians these probabilities are coherent judgments about the race rather than properties of ensembles of races with those horses. However, there is such a thing as knowing more about those horses; there is a world out there! Thus there are better probability assignments. That's why a Bayesian has Bayes's rule: it reflects learning more. It is not that you must bet whenever you want because all probabilities are the same, even uninformed ones.

All three views will agree on the primacy of frequency data as a major way of testing one's assignments.
 
  • #185
DarMM said:
Well it's not as if Subjective Bayesianism is a statement that knowledge doesn't matter and you can bet when you want.

Rather, take a horse race with the horses given various probabilities of winning by the bookies (I'm not talking about the odds, but the probabilities the bookie will use prior to offering odds). To Bayesians these probabilities are coherent judgments about the race rather than properties of ensembles of races with those horses. However, there is such a thing as knowing more about those horses; there is a world out there! Thus there are better probability assignments.
The same state can be a spurious state, on which it is foolish to bet, or an informative state, betting on which can earn you a living. Thus the complete knowledge about a real situation would consist of (at least) a state and an assessment of how informative the state is, as you need both to be successful at betting. But then not all knowledge can be in the state.

However, in quantum mechanics, the state is claimed to encode all knowledge about the system.
Thus there is an inconsistency...
 
  • #186
A. Neumaier said:
The update does not help when the bet has to be made before further knowledge can be accumulated. A subjective Bayesian will bet (or why shouldn't he, according to the Bayesian paradigm?
Perhaps a better response would be that a Bayesian has probabilities as states of knowledge. Since there is such a thing as "knowing more" there are better states. However that's not in contradiction to the subjective nature of that knowledge.
 
  • #187
DarMM said:
a Bayesian has probabilities as states of knowledge. Since there is such a thing as "knowing more" there are better states.
In ''a state of a classical particle'' or ''a state of a beam of light'', the state says everything about the entity of which it is the state, while in your sentence the word "state" just means ''attribute'', it seems.

Without specifying a clear, unambiguous meaning for the concept of ''knowledge", anything based on it has very unsafe foundations.
 
  • #188
A. Neumaier said:
In ''a state of a classical particle'' or ''a state of a beam of light'', the state says everything about the entity of which it is the state, while in your sentence the word "state" just means ''attribute'', it seems.

Without specifying a clear, unambiguous meaning for the concept of ''knowledge", anything based on it has very unsafe foundations.
I think we're now just back to probability in the foundations.

Although de Finetti does have a decent enough definition, I think, in terms of coherent numerical beliefs, i.e. ones that can't be Dutch booked. Numerical belief assignments that can't be Dutch booked obey the Kolmogorov axioms, and thus one recovers the normal probability axioms.

Coherency even forces the law of large numbers: avoiding Dutch booking means that if you think event ##E## has probability ##P(E)##, then on repeated trials with ##E## as a possible outcome you should assign a probability approaching ##1## that, in ##N## trials as ##N \rightarrow \infty##, the ratio of ##E## events to total events will be roughly ##P(E)##.

I don't see it as completely arbitrary; he does give an axiomatic statement of what he means. It's just that it permits you to update those belief assignments in light of observations. Indeed the Dutch booking gives you Bayes's rule.
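
In the usual notation, the statement recovered this way is the weak law of large numbers: if ##N_E## is the number of ##E##-outcomes in ##N## exchangeable trials, then for every ##\epsilon > 0##
$$P\left(\left|\frac{N_E}{N}-P(E)\right|>\epsilon\right)\rightarrow 0 \qquad \text{as} \qquad N \rightarrow \infty.$$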
 
  • #189
DarMM said:
I think we're now just back to probability in the foundations.
No; in the last few posts we were discussing subjective probability only. Subjective probability replaces the basic notion of probability by the even more problematic basic notion of knowledge, which is a step backwards. Frequentist probability has no such problems; its only problem is that what we can know (in the informal sense) about the true state (the subject of quantum mechanics) is limited in accuracy by the law of large numbers.

DarMM said:
Although de Finetti does have a decent enough definition, I think, in terms of coherent numerical beliefs
A. Neumaier said:
Thus we need some sort of objective, optimal subjectivist.
I just found the following here:
Wikipedia said:
In the Brukner–Zeilinger interpretation, a quantum state represents the information that a hypothetical observer in possession of all possible data would have. Put another way, a quantum state belongs in their interpretation to an optimally-informed agent, whereas in QBism, any agent can formulate a state to encode her own expectations.
I don't think that solves much, but at least it is more sensible.

Note that I do not dispute Bayesian probability as a mathematical subject and Bayesian procedures as rules justified for problems of decision making. But they are highly questionable in the foundations of physics.

DarMM said:
Indeed the Dutch booking gives you Bayes's rule.
but only in the form (1.3) in post #182. According to this, knowledge is represented not by a density operator but by a probability distribution on density operators. In terms of degrees of freedom (for a qubit, an infinite-dimensional manifold of states ##P(\rho)## of knowledge), this is heavy overkill compared to the parsimony of quantum mechanics (for a qubit, a 3-dimensional manifold of states ##\rho## of the qubit). Thus most of the subjective Bayesian information to be updated is relevant only for modeling mind processes manipulating knowledge, but irrelevant for encoding physics.
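
For the qubit the counting is explicit: every state can be written as
$$\rho=\tfrac{1}{2}\left(\hat{1}+\vec{r}\cdot\vec{\sigma}\right),\qquad |\vec{r}|\le 1,$$
so three real parameters (the Bloch vector ##\vec r##) specify ##\rho## completely, whereas a density ##P(\rho)## over the Bloch ball is an infinite-dimensional object.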

Frequentist probability is unaffected by these problems; its place in the foundation is much more acceptable.
 
  • #190
A. Neumaier said:
but only in the form (1.3) in post #182
That's different. Dutch booking in de Finetti's treatment of probability (see his own monograph or Kadane's) gives you Bayes's rule for classical probability in its typical form.

The representation theorem shows that all probability assignments (density matrices in the quantum case) have an alternate form (the "representation" to which the theorem's title refers) as a distribution over assignments. The space of states is still the same, e.g. the 3D manifold you mentioned. The alternate form simply shows that one can always think of one's current state as such a distribution, and furthermore that separate agents with different initial priors can conceive of their whole sequence of Bayesian updates as a narrowing distribution over the space of probability assignments. This explains why, in a subjectivist setting, they converge to the same results.

The actual state space is not different. It is simply that the alternate representation allows a tidy demonstration of why subjectivist updating can act like "slowly finding the true state" and why different priors converge given the same data.
 
  • #191
I don't know how @A. Neumaier can misunderstand what I wrote in my notes on statistical physics. As he rightly says, it's in accordance with the standard interpretation, and that's my intention: I don't see any problems with the standard interpretation (which for me is the minimal statistical interpretation).

A system's state is as completely determined as possible according to QT if it is prepared in a pure state. If there is incomplete knowledge about the system, one has to describe it with a mixed state, and the problem is how to choose this mixed state according to the knowledge about the system at hand; one objective way is to argue with information theory and the maximum-entropy principle.

I don't see where there is a contradiction to what I wrote in one of my earlier postings today. There I explained the well-known standard procedure for describing a part of a larger system. The answer in all textbooks I know is that you take the partial trace.

Nothing at all contradicts the statements in my manuscript: If you have a big quantum system, this big quantum system can well be completely prepared, i.e., prepared in a pure state, and then the part of the system you describe by tracing out the other part(s) according to this rule is usually in a mixed state. Of course, tracing out the unwanted part of the big system and describing only one part means ignoring the rest of the system. This of course means that you lose information, and thus the partial system is not in a pure state. Why should it be? The reduced density matrix is the correct choice based on the knowledge we have in this case, which is that the big system is prepared in some pure state but that we choose to ignore parts of the system and only look at one part, of which we have only partial information and which we thus describe by a mixed state.

Take Bohm's spin-1/2 example, the preparation of a spin-1/2 pair in the singlet state (total spin ##S=0##). Then the pair is in the pure state
$$\hat{\rho}=|\Psi \rangle \langle \Psi| \quad \text{with} \quad |\Psi \rangle=\frac{1}{\sqrt{2}} (|1/2,-1/2 \rangle - |-1/2,1/2 \rangle).$$
Tracing out particle 2, i.e., only looking at particle 1, leads to the state for particle 1,
$$\hat{\rho}_1= \mathrm{Tr}_2 \hat{\rho} = \frac{1}{2} (|1/2 \rangle \langle 1/2| + |-1/2 \rangle \langle -1/2|)=\frac{1}{2} \hat{1},$$
i.e., to the state of maximum entropy.

The same of course holds for the reduced stat. op. of particle 2, which is described by
$$\hat{\rho}_2=\mathrm{Tr}_1 \hat{\rho}=\frac{1}{2} \hat{1}.$$
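
For those who want to check this numerically, a small numpy sketch of the partial trace (the variable names are only illustrative):

import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# |Psi> = (|1/2,-1/2> - |-1/2,1/2>)/sqrt(2) in the four-dimensional product basis
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho = np.outer(psi, psi)                         # pure two-spin state, rank 1

# partial trace over particle 2: reshape to (2,2,2,2) and contract the particle-2 indices
rho_1 = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))
print(np.round(rho_1, 3))                        # 0.5 * identity, the maximum-entropy spin state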

This is in full accordance with my statistics script and (as you claim) with Landau and Lifshitz (I guess you refer to vol. III, which I consider one of the better QT books, with somewhat too much emphasis on wave mechanics, but that's a matter of taste; physicswise it's amazingly up to date given its date of publication; one just has to ignore the usual collapse-hypothesis argument of older QM textbooks ;-))):

You have to distinguish precisely who describes which system and how to associate the statistical operators with the various systems. For the above example you have the following:

(1) An observer Alice, who only measures the spin of particle 1 (you distinguish particle 1 and particle 2 simply by where they are measured; I don't want to make the example too complicated, so I ignore the spatial part, which is however important when it comes to identical particles in this example). What she measures are simply completely unpolarized particles, and thus her stat. op. for the spin state is that of maximal entropy, which is ##\hat{\rho}_1## with the maximal possible entropy for a spin-1/2 spin component, ##S_1=\ln 2##.

(2) An observer Bob, who only measures the spin of particle 2. What he measures are simply completely unpolarized particles, and thus his stat. op. for the spin state is that of maximal entropy, which is ##\hat{\rho}_2## with the maximal possible entropy for a spin-1/2 spin component, ##S_2=\ln 2##.

(3) An observer Cecil, who knows that the particle pair was produced through the decay of a scalar particle at rest and thus that its total spin is ##s=0##. He describes the state of the complete system (consisting of two spins here) by the pure state ##\hat{\rho}##, and thus his knowledge is complete and accordingly the entropy is ##S=0##.

He is the one who knows, without even knowing the measurement results of A and B, that there's a 100% correlation of the two measured spins, namely if A finds ##+1/2##, B must necessarily find ##-1/2## and vice versa. That's independent of the temporal order in which A and B measure their respective spins, and thus there's no causal "action at a distance" of either's spin measurement on the other's particle.

All three descriptions of the situation are thus (a) consistent, (b) free of any nonlocal action at a distance caused by A's and B's local spin measurements, and (c) not in contradiction with the statement that A's and B's knowledge prior to their measurements is less complete than C's. In this case it is even taken to the extreme that C's knowledge is complete, i.e., he associates the entropy 0 ("no missing information") with his knowledge, while A and B have the least possible information, and that's also what they will find when doing their spin measurements.

This example shows that there are no contradictions within minimally interpreted QT nor between Einstein causality and QT.

The fact that a part of a bigger system that is completely prepared is itself not completely prepared, by the way, was Einstein's true quibble with QT, not what's written in this (in)famous EPR paper, which Einstein himself didn't like much, being quite unhappy with Podolsky's formulations when writing it up. He called this feature of quantum theory "inseparability", and that's the real profound physical value of this debate: It triggered Bell to develop his famous inequality, valid for all local deterministic hidden-variable models, and led to the empirical conclusion that all these are wrong but QT is right, and that Einstein's quibble, the inseparability, is an empirically validated fact.
 
  • Like
Likes DarMM
  • #192
Now there's also this strange idea about "subjective probabilities" in this thread. Whatever this might be, it's not modern quantum theory, which on the contrary (together with information theory) is a method to provide objective probabilities, reflecting precisely what the observers know about the system, and not something "subjective" obtained by choosing an inappropriate probability description that introduces some bias not justified by what's known about the system.
 
  • #193
@A. Neumaier see this quote from the paper on p.13:
The upshot of the theorem, as already advertised, is that it makes it possible to think of an exchangeable quantum-state assignment as if it were a probabilistic mixture characterized by a probability density ##P(\rho)## for the product states ##\rho^{\otimes N}##
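
In other words, schematically, an exchangeable state assignment for ##N## systems takes the form
$$\rho^{(N)}=\int \mathrm{d}\rho\, P(\rho)\, \rho^{\otimes N}.$$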
 
  • #194
vanhees71 said:
Now there's also this strange idea about "subjective probabilities" in this thread. Whatever this might be, it's not modern quantum theory, which on the contrary (together with information theory) is a method to provide objective probabilities, reflecting precisely what the observers know about the system, and not something "subjective" obtained by choosing an inappropriate probability description that introduces some bias not justified by what's known about the system.
Well, I don't know if it's a "strange idea" simply because it mightn't be useful in modern quantum theory. However, it is useful, since it's just an alternate motivation for statistical tools that you can use regardless of what you think of probability theory. Such an application is here:
https://journals.aps.org/pra/abstract/10.1103/PhysRevA.93.012103
 
  • #195
DarMM said:
That's different. Dutch booking in de Finetti's treatment of probability (see his own monograph or Kadane's) gives you Bayes's rule for classical probability in its typical form.
Once one has the rules of probability theory (which any foundation of probability should produce), Bayes rule is a triviality. So why do you claim its derivation through Dutch booking as an asset?

DarMM said:
their whole sequence of Bayesian updates as a narrowing distribution over the space of probability assignments.
But this update is still an update analogous to (1.3), and my critique applies, though now to a classical bit: Uncertain knowledge is represented not by a classical density but by a probability distribution on classical densities. In terms of degrees of freedom (for a bit, an infinite-dimensional manifold of states ##P(p)## of knowledge), this is even heavier overkill compared to the parsimony of uncertain classical mechanics (for a bit, the interval ##[0,1]## of probabilities ##p## of the bit being 1). Thus most of the subjective Bayesian information to be updated is relevant only for modeling mind processes manipulating knowledge, but irrelevant for encoding physics.
DarMM said:
The actual state space is not different.
Then please answer again my question in 1. of post #179, in terms of the actual state space. If the knowledge is ##\rho##, how is it updated when a new measurement result comes in? What is the updated ##\rho##?
vanhees71 said:
Now there's also this strange idea about "subjective probabilities" in this thread. Whatever this might be, it's not modern quantum theory, which on the contrary (together with information theory) is a method to provide objective probabilities, reflecting precisely what the observers know about the system, and not something "subjective" obtained by choosing an inappropriate probability description that introduces some bias not justified by what's known about the system.
Well, we are discussing here (in the whole thread) various interpretations of quantum mechanics, and some of them are based on subjective probability. I find it strange, too, but one cannot usually discuss other interpretations by casting them in one's own differing interpretation without losing important features - one must use the language in which they describe themselves.
 
  • #196
A. Neumaier said:
Or what else should distinguish the frequentist from the Bayesian?
In the context of statistics, these are two different approaches to inference. In hypothesis testing (or theory testing, for Karl Popper), the frequentist statistician computes a p value, which is Pr(data | H0) (e.g., probabilities of events according to a certain theory), but the Bayesian statistician computes Pr(H0 | data) (e.g., probabilities of the theories in view of certain events).
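
In formulas, the two quantities are related by Bayes's rule,
$$\Pr(H_0 \mid \text{data})=\frac{\Pr(\text{data} \mid H_0)\,\Pr(H_0)}{\Pr(\text{data})},$$
so the Bayesian additionally needs a prior ##\Pr(H_0)##, which plays no role in the frequentist's p value.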

https://www.austincc.edu/mparker/stat/nov04/talk_nov04.pdf


/Patrick
 
  • #197
A. Neumaier said:
Once one has the rules of probability theory (which any foundation of probability should produce), Bayes rule is a triviality. So why do you claim its derivation through Dutch booking as an asset?
When did I claim that? It's how de Finetti does it; I'm not sure what it would mean to say it's an asset, but it's necessary. It's how this approach derives it; it's not "better", though, if that's what "asset" is meant to mean. I think the way he derives it is "neat", in the sense that the proof is a nice way to look at it, but that's about it.

A. Neumaier said:
But this update is still an update analogous to (1.3), and my critique applies, though now to a classical bit: Uncertain knowledge is represented not by a classical density but by a probability distribution on classical densities
No. Uncertain knowledge is represented by a classical density, as always. However, one's uncertain knowledge about a sequence of ##N## trials, which is also a classical density, can be shown to be equivalent to a probability distribution over classical densities. Via this alternate representation one can demonstrate convergence from different starting priors given the same data for large ##N##.

It's an alternate form used to prove that in Subjective Bayesianism people with the same large set of data will tend towards agreement. It's not what a classical probability assignment is in Subjective Bayesianism.
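
For the classical case, a minimal numerical illustration of that convergence (the Beta-Bernoulli model and the numbers are only illustrative):

import numpy as np

rng = np.random.default_rng(2)
data = rng.random(10_000) < 0.7             # 10,000 Bernoulli(0.7) trials, unknown to the agents
heads, tails = int(data.sum()), int((~data).sum())

# A Beta(a, b) prior updates to a Beta(a + heads, b + tails) posterior; print the posterior means.
for a, b in [(1, 1), (50, 5), (2, 40)]:     # three very different initial beliefs
    a_post, b_post = a + heads, b + tails
    print((a, b), '->', round(a_post / (a_post + b_post), 3))   # all close to 0.7

Despite the different priors, all three posterior means end up near the observed frequency.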

A. Neumaier said:
Then please answer again my question in 1. of post #179, in terms of the actual state space. If the knowledge is ##\rho##, how is it updated when a new measurement result comes in? What is the updated ##\rho##?
I should have answered this better. The form given in (1.3) is the representation that allows one to show that the regular form of updating used in quantum tomography is valid.
 
  • #198
DarMM said:
When did I claim that?
DarMM said:
he does give an axiomatic statement of what he means. It's just that it permits you to update those belief assignments in light of observations. Indeed the Dutch booking gives you Bayes's rule.
But never mind, it is not a critical issue.
A. Neumaier said:
Then please answer again my question in 1. of post #179, in terms of the actual state space. If the knowledge is ##\rho##, how is it updated when a new measurement result comes in? What is the updated ##\rho##?
DarMM said:
I should have answered this better. The form given in (1.3) is the representation that allows one to show that the regular form of updating used in quantum tomography is valid.
This still leaves me completely in the dark. Suppose that I want to program a subjective Bayesian observer and assign him as prior state for a particular stationary qubit source the state ##\rho##. Now my robot observer tests the qubit for being up, and gets a positive result. As a subjective Bayesian, what should be the robot's updated state ##\rho'## in the light of the new information gathered?

You had objected to my suggestion that a subjective Bayesian could update arbitrarily. So how should my robot update rationally? I need an explicit formula to be able to program it, not an abstract theory that produces meta results about Bayesian consistency. Please help me.
 
  • #199
A. Neumaier said:
But never mind, it is not a critical issue
Sorry, I don't understand: where am I saying it's an asset? I'm just saying that (in Subjective Bayesianism) Dutch booking provides you with Bayes's theorem, i.e. it's the method of its derivation. Am I misunderstanding the English word "asset"? :confused:

A. Neumaier said:
You had objected to my suggestion that a subjective Bayesian could update arbitrarily. So how should my robot update rationally? I need an explicit formula to be able to program it, not an abstract theory that produces meta results about Bayesian consistency. Please help me.
Lüders rule in the simple case of iterated measurements not using POVMs.
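
That is, in the projective case, if the test associated with the projector ##P## comes out positive, the state is updated as
$$\rho \;\rightarrow\; \rho'=\frac{P\rho P}{\mathrm{Tr}(\rho P)}.$$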
 
  • #200
DarMM said:
Lüders rule in the simple case of iterated measurements not using POVMs.
Lüders' rule does not apply here; it is not about updating a poor prior state for the source but about finding the state prepared after passing the test, given that the state of the source is already fully known.

But the robot uses destructive tests on qubits sequentially emitted by the source, just to learn (as in quantum tomography) about the state prepared by the source. I want to know how the robot should modify his subjective density matrix in the light of the result of a single destructive test, in order to improve it, in such a way that by repeating the procedure sufficiently often it predicts better and better approximations of the observed statistics.
 
  • #201
DarMM said:
Am I misunderstanding the English word "asset"? :confused:
No. I misunderstood your intentions. Forget it.
 
  • #202
Sorry, I misunderstood the example you gave. The point is that, regardless of the example, the state update rules are just those used in quantum tomography in practice.

In this case, if I have the example right, it's the usual measurements to determine the Stokes parameters, just reinterpreted. If I have your example wrong, can you say what the typical way of doing this is, so I can check?
 
  • #203
DarMM said:
regardless of the example, the state update rules are just those used in quantum tomography in practice.

In this case, if I have the example right, it's the usual measurements to determine the Stokes parameters, just reinterpreted. If I have your example wrong, can you say what the typical way of doing this is, so I can check?
This only shifts the problem. Given a prior for the Stokes vector, how is it updated when a new measurement comes in?
Quantum tomography does no updating. It estimates from scratch the expectations of three test operators, and that's it.
It does not tell you how to modify a subjective Stokes vector in a rational manner when one test result of an arbitrary test becomes known.
 
  • #204
DarMM said:
Well, I don't know if it's a "strange idea" simply because it mightn't be useful in modern quantum theory. However, it is useful, since it's just an alternate motivation for statistical tools that you can use regardless of what you think of probability theory. Such an application is here:
https://journals.aps.org/pra/abstract/10.1103/PhysRevA.93.012103
The only thing I think about probability theory, successfully applied in statistics as well as theoretical physics for about 150 years, is that it works. That's all I need to justify the use of any specific mathematical concept in the natural sciences.
 
  • #205
A. Neumaier said:
Well, we are discussing here (in the whole thread) various interpretations of quantum mechanics, and some of them are based on subjective probability. I find it strange, too, but one cannot usually discuss other interpretations by casting them in one's own differing interpretation without losing important features - one must use the language in which they describe themselves.
Yes, and my very point is that it doesn't make sense to introduce more and more abstruse and esoteric "concepts" to clarify the meaning of Q(F)T.

This is a science forum (at least that's what I thought) and not about philosophy (not even about philosophy of science). However, the QM section is more and more being turned into a discussion forum about this off-topic subject, and I find this a pity. In particular, even in threads where a student asks some scientific question about introductory QM, the discussion soon turns to quibbles with the standard minimal interpretation.

I still think, and I hope the mentors here will finally agree, that one should split the QM section into a strictly scientific part, where standard QM is discussed, and another philosophy-of-science part, where all these speculations about apparent problems, which are in fact pseudo-problems, are discussed without confusing people interested in science rather than cargo cult!
 
  • Like
Likes Mentz114
  • #206
A. Neumaier said:
Well, we are discussing here (in the whole thread) various interpretations of quantum mechanics, and some of them are based on subjective probability.
vanhees71 said:
This is a science forum (at least that's what I thought) and not about philosophy (not even about philosophy of science).
Subjective probability is discussed even in theoretical books about probability theory, such as the one by Whittle (who primarily gives an exposition of the frequentist view), and is discussed at length in quite a number of books on Bayesian statistics, relevant for real data processing, even in physics.
vanhees71 said:
even in threads where a student asks some scientific question about introductory QM
This thread is explicitly about ''the deeper meaning of quantum mechanics'', so you shouldn't complain in this thread.
vanhees71 said:
one should split the QM section into a strictly scientific part, where standard QM is discussed and another philosophy-of-science part
This split is as ill-defined as the Heisenberg cut - different people place it differently. Like everywhere in discussion forums, the controversial issues take the most space, but are for most readers and contributors also the most interesting ones.

In the last few years, my main motivation to discuss on PF (and alongside also contribute information to other topics of secondary interest to me) was that here one can sensibly discuss foundational questions. While some of the discussion repeats too often without presenting new aspects, I find those threads where I continue to contribute for the most part really informative. I simply stop watching and contributing to the ones that degenerate - you could easily do the same.

Without these foundational discussions I'd have little incentive to spend time on PF, and would also not contribute to other quantum physics topics.
 
  • Like
Likes eloheim, mattt, Auto-Didact and 1 other person
  • #207
vanhees71 said:
Yes, and my very point is that it doesn't make sense to introduce more and more abstruse and esoteric "concepts" to clarify the meaning of Q(F)T.

I disagree that anyone is being abstruse or esoteric. And I disagree with your labeling of the discussions as "philosophical". I think that the discussions are physics, not philosophy.
 
  • Like
Likes mattt and Auto-Didact
  • #208
The discussion of "subjective" vs. "objective" probbilities IS esoterical. Probability is a clear defined mathematical concept with clear applications in terms of statistics. Completed by information theory and QT it provides objective assignments of probability distributions of real-world systems.
 
  • #209
A. Neumaier said:
Without these foundational discussions I'd have little incentive to spend time on PF, and would also not contribute to other quantum physics topics.
What's the problem discussing this simply in another subforum?
 
  • #210
vanhees71 said:
Probability is a clearly defined mathematical concept with clear applications in terms of statistics
Professional statisticians, even very applied ones, fall roughly half and half into objective (primarily frequentist) and subjective (primarily Bayesian) schools, using and recommending different analysis procedures based on the differences in the underlying understanding of probability.
vanhees71 said:
What's the problem discussing this simply in another subforum?
I mainly discuss to learn, not to contribute. Probably I would look only very rarely at the quantum physics forum (as I do now with the other forums) and hence not contribute my knowledge there.

What's your problem with simply ignoring threads about foundations (i.e., what you label philosophy)?
 
  • Like
Likes eloheim and Auto-Didact
