Jürg Fröhlich on the deeper meaning of Quantum Mechanics

  • #51
stevendaryl said:
if you treat the measuring device (plus observer plus the environment plus whatever else is involved) as a quantum mechanical system that evolves under unitary evolution

Then you are saying that no measurement occurred. That removes the contradiction; in its place is simply a choice of whether or not to treat the system as if a measurement occurred.

The issue with the minimal interpretation is that there is no rule that tells you when a measurement occurs. In practice the rule is that you treat measurements as having occurred whenever you have to in order to match the data. So in your example, since nobody actually observes observers to be in superpositions of pointer states, and observers always observe definite results, in practice we always treat measurements as having occurred by the time an observer observes a result.
 
  • Like
Likes dextercioby and vanhees71
  • #52
vanhees71 said:
There's no single proof of non-unitarity. In some sense one can even say that everyday experience (the validity of thermodynamics) tells the opposite: unitarity ensures the validity of the principle of detailed balance.
I don't agree. My problem is irreversibility, which is demanded of the measurement by the purists but is unobtainable with unitary evolution.
 
  • Like
Likes Auto-Didact
  • #53
vanhees71 said:
In other words your problem is that you don't want to accept the probabilistic nature of the quantum description

No, the problem is you refuse to consider the time evolution of the measuring device itself as the unitary evolution of a quantum system. But this is the only thing that makes sense, since the device is made of electrons and nucleons, which everyone agrees are quantum systems.

You are implicitly dividing the world in two, where the meaning of quantum systems is defined only by the probabilistic responses they trigger in classical devices, which you independently assume to already exist. But there is no sensible way to explain how these classical devices can ever come to exist in the first place.
 
  • Like
Likes eloheim, Auto-Didact and dextercioby
  • #54
This is basically just a discussion over what's going on in Wigner's friend, right?

Would be interesting to see how it works out in Fröhlich's view since he doesn't have observers in the usual sense. I think he'd just have his commutation condition determine when the measurement event has occurred in an objective sense.
 
Last edited:
  • #55
vanhees71 said:
In other words your problem is that you don't want to accept the probabilistic nature of the quantum description.

There is nothing I said that suggests that, and it's not true. That's ignoring what I actually said, and pretending that I said something different, that you have a prepared response for.
 
  • Like
Likes Auto-Didact
  • #56
The issue with quantum mechanics is that it is NOT a probabilistic theory, until you specify a basis. Then you can compute probabilities using the Born rule. But what determines which basis is relevant?

The minimal interpretation says it's whichever basis corresponds to the observable being measured. But what does it mean that a variable is being measured? It means, ultimately, that the interaction between the system being measured and the measuring device is such that values of the variable become correlated with macroscopic "pointer variables".

So, the minimal interpretation ultimately gives a preference to macroscopic quantities over other variables, but this preference is obfuscated by the use of the word "measurement". The inconsistency is that if you treat the macroscopic system as a huge quantum mechanical system, then no measurement will have taken place at all. The macroscopic system (plus the environment, and maybe the rest of the universe) will not evolve into a definite pointer state.

So whether you consider a macroscopic interaction to be a measurement or not leads to different results. That's an inconsistency in the formalism. The inconsistency can be resolved in an ad hoc manner by just declaring that macroscopic systems are to be treated differently than microscopic systems, but there is no support for this in the minimal theory. The minimal theory does not in any way specify that there is a limit to the size of system that can be analyzed using quantum mechanics and unitary evolution.
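For concreteness, a minimal two-outcome sketch (the kets ##|\uparrow\rangle, |\downarrow\rangle## stand for system states and ##|D_0\rangle, |D_\uparrow\rangle, |D_\downarrow\rangle## for "ready" and pointer states of the device; the labels are just illustrative):
$$
\big(\alpha|\uparrow\rangle + \beta|\downarrow\rangle\big)\otimes|D_0\rangle \;\xrightarrow{\ \text{unitary}\ }\; \alpha\,|\uparrow\rangle|D_\uparrow\rangle + \beta\,|\downarrow\rangle|D_\downarrow\rangle .
$$
The right-hand side is a superposition of pointer states, not a definite outcome; applying the Born rule in the pointer basis gives the probabilities ##|\alpha|^2## and ##|\beta|^2##, but nothing in the unitary description itself selects one term.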
 
  • Like
Likes eloheim, Auto-Didact, mattt and 1 other person
  • #57
charters said:
You are implicitly dividing the world in two, where the meaning of quantum systems is defined only by the probabilistic responses they trigger in classical devices, which you independently assume to already exist. But there is no sensible way to explain how these classical devices can ever come to exist in the first place.

That's exactly right. The minimal interpretation requires two contradictory things: (1) that any system composed of quantum mechanical particles and fields, no matter how large, evolves unitarily according to the Schrödinger equation, and (2) that macroscopic measurement devices are treated as always having definite values for "pointer variables" (the results of measurements). These two requirements contradict each other.
 
  • Like
Likes eloheim and Auto-Didact
  • #58
stevendaryl said:
That's exactly right. The minimal interpretation requires two contradictory things: (1) that any system composed of quantum mechanical particles and fields, no matter how large, evolves unitarily according to the Schrödinger equation, and (2) that macroscopic measurement devices are treated as always having definite values for "pointer variables" (the results of measurements). These two requirements contradict each other.
What's the contradiction if one understands the quantum state probabilistically? This exact issue appears in Spekkens model where the resolution is clear. I don't understand what is different about QM that makes this a contradiction.
 
  • #59
vanhees71 said:
..... since a measurement result comes about through interactions of the measured system with the measurement device.

Here I disagree. In Renninger-type measurements the “reduction” of the wave function is accomplished without any physical interaction. As Nick Herbert writes in “Quantum Reality: Beyond the New Physics”:

The existence of measurements in which “nothing happens” (Renninger-style measurement), where knowledge is gained by the absence of a detection, is also difficult to reconcile with the view that irreversible acts cause quantum jumps. In a Renninger-style measurement, there must always be the “possibility of an irreversible act” (a detector must actually be present in the null channel), but this detector does not click during the actual measurement. If we take seriously the notion that irreversible acts collapse the wave function, Renninger measurements require us to believe that the mere possibility of an irreversible act is sufficient to bring about a quantum jump. The fact that such “interactionless” measurements are possible means that the wave function collapse cannot be identified with some specific random process occurring inside a measuring device.
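As a minimal sketch of the situation Herbert describes (with ##|1\rangle## and ##|2\rangle## standing for "heading toward detector ##D_1##" and "heading toward detector ##D_2##"; the labels are only illustrative):
$$
|\psi\rangle = \alpha|1\rangle + \beta|2\rangle \;\xrightarrow{\ D_1\ \text{does not click}\ }\; \frac{\beta}{|\beta|}\,|2\rangle ,
$$
i.e. the state is updated (to ##|2\rangle## up to a phase) even though the detector in the null channel registered nothing.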
 
  • #60
Lord Jestocost said:
Here I disagree. In Renninger-type measurements the “reduction” of the wave function is accomplished without any physical interaction. As Nick Herbert writes in “Quantum Reality: Beyond the New Physics”:

The existence of measurements in which “nothing happens” (Renninger-style measurement), where knowledge is gained by the absence of a detection, is also difficult to reconcile with the view that irreversible acts cause quantum jumps. In a Renninger-style measurement, there must always be the “possibility of an irreversible act” (a detector must actually be present in the null channel), but this detector does not click during the actual measurement. If we take seriously the notion that irreversible acts collapse the wave function, Renninger measurements require us to believe that the mere possibility of an irreversible act is sufficient to bring about a quantum jump. The fact that such “interactionless” measurements are possible means that the wave function collapse cannot be identified with some specific random process occurring inside a measuring device.

The irreversibility is not in the system being measured, but in the system doing the measuring. Any time knowledge is gained, that means that the system doing the measuring has been irreversibly changed.
 
  • #61
Mentz114 said:
I don't agree. My problem is irreversibility, which is demanded of the measurement by the purists but is unobtainable with unitary evolution.
The irreversibility comes into physics through coarse graining. In classical physics, too, there is no irreversibility at the fundamental level. Of course, for philosophers, this also opens a can of worms (or even Pandora's box, if you wish). There are debates about this even older than the debates about QT. From the physics point of view there's no problem. On the contrary, it's well understood, and the "arrow of time" comes into physics as a basic postulate in the sense of the "causal arrow of time". Like any fundamental assumption/postulate/axiom, whatever you want to call it, in the edifice of theoretical physics, it cannot be proven but is assumed based on experience, and this is the most fundamental experience of all: that there are "natural laws" which can be described mathematically. About this, too, you can build a lot of mysteries and philosophies of all kinds. From a physics point of view that's all irrelevant, but perhaps nice for your amusement in the sense of fairy tales.

The point with unitarity is that it guarantees that the "thermodynamical arrow of time" is inevitably consistent with the "causal arrow of time"; this is not a fundamental law but can be derived from the assumption of a causal arrow of time and the unitarity of the time evolution of closed quantum systems. With the thermodynamical arrow of time, irreversibility is also well determined, i.e., the fact that entropy has increased. Note also that the entropy depends on the level of description, or coarse graining: it measures the missing information, given the level of description, relative to what's defined as "complete knowledge".
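As a rough sketch of what is meant (##\rho## the exact state, ##\bar\rho## a coarse-grained state obtained by discarding the irrelevant degrees of freedom):
$$
S_{\text{fine}} = -k_B\,\mathrm{Tr}\,(\rho\ln\rho)\ \text{ is constant under unitary evolution}, \qquad
S_{\text{coarse}} = -k_B\,\mathrm{Tr}\,(\bar\rho\ln\bar\rho)\ \text{ can grow}.
$$
The irreversibility thus refers to the coarse-grained entropy, which quantifies the missing information relative to the chosen level of description.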
 
  • Informative
Likes Mentz114
  • #62
charters said:
No, the problem is you refuse to consider the time evolution of the measuring device itself as the unitary evolution of a quantum system. But this is the only thing that makes sense, since the device is made of electrons and nucleons, which everyone agrees are quantum systems.

You are implicitly dividing the world in two, where the meaning of quantum systems is defined only by the probabilistic responses they trigger in classical devices, which you independently assume to already exist. But there is no sensible way to explain how these classical devices can ever come to exist in the first place.
Why should I describe a measurement device like this? Do you describe a rocket flying to the moon as a quantum system? I don't believe that this makes much sense. That's why we use the adequately reduced (coarse grained) description for measurement devices or rockets: it's because it's impossible to describe the microstate of a macroscopic system (apart from the rare cases where it is in a simple enough state, like some systems close to zero temperature, e.g. liquid He or a superconductor). As it turns out, the effective quantum description of macroscopic systems almost always leads to behavior of the relevant macroscopic degrees of freedom as described by classical physics (Newton's laws of motion, including gravity, for the moon rocket).
 
  • #63
Lord Jestocost said:
Here I disagree. In Renninger-type measurements the “reduction” of the wave function is accomplished without any physical interaction. As Nick Herbert writes in “Quantum Reality: Beyond the New Physics”:

The existence of measurements in which “nothing happens” (Renninger-style measurement), where knowledge is gained by the absence of a detection, is also difficult to reconcile with the view that irreversible acts cause quantum jumps. In a Renninger-style measurement, there must always be the “possibility of an irreversible act” (a detector must actually be present in the null channel), but this detector does not click during the actual measurement. If we take seriously the notion that irreversible acts collapse the wave function, Renninger measurements require us to believe that the mere possibility of an irreversible act is sufficient to bring about a quantum jump. The fact that such “interactionless” measurements are possible means that the wave function collapse cannot be identified with some specific random process occurring inside a measuring device.
Can you give a concrete example of a real-world experiment where a measurement occurs without interaction of the measured system with some measurement device? I'd say that if the system doesn't interact with the measurement device, there cannot be a measurement to begin with. I have no clue what a "Renninger-style measurement" might be.
 
  • #64
Lord Jestocost said:
In a Renninger-style measurement, there must always be the “possibility of an irreversible act” (a detector must actually be present in the null channel), but this detector does not click during the actual measurement. If we take seriously the notion that irreversible acts collapse the wave function, Renninger measurements require us to believe that the mere possibility of an irreversible act is sufficient to bring about a quantum jump. The fact that such “interactionless” measurements
These are not interactionless - a null measurement is obtained, and as a consequence the state collapses (though not necessarily to an eigenstate).
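A minimal sketch of such a null-result update (with ##P## the projector onto the states the detector would have registered, and ##\rho## the pre-measurement state; the symbols are just illustrative):
$$
\rho \;\longrightarrow\; \frac{(1-P)\,\rho\,(1-P)}{\mathrm{Tr}\!\big[(1-P)\,\rho\big]} ,
$$
which is in general a mixed state and need not be an eigenstate of whatever observable one had in mind.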
 
  • #65
vanhees71 said:
Do you describe a rocket flying to the moon as a quantum system? I don't believe that this makes much sense.
Do you want to imply that a rocket flying to the moon is not a quantum system? What then is the size at which a system loses its describability as a quantum system?
vanhees71 said:
That's why we use the adequately reduced (coarse grained) description for measurement devices or rockets
Already the possibility of a reduced description requires that there is a theoretically possible, though unknown complete description, which we can reduce by coarse graining.
 
  • Like
Likes eloheim, Auto-Didact and dextercioby
  • #66
vanhees71 said:
Can you give a concrete example of a real-world experiment...

As far as I know, Renninger-type measurements are thought experiments, see, for example:
Towards a Nonlinear Quantum Physics
 
  • #67
bob012345 said:
Feynman said nobody understands Quantum Mechanics. I think that's even more true today. I think it was Dirac who famously said something paraphrased as "shut up and calculate".

Much additional confusion arises when one doesn’t recognize that the “objects” which are addressed by quantum theory (QT) are - in a scientific sense - fundamentally different from the “objects” which are addressed by classical physical theories. As pointed out by James Jeans in his book “PHYSICS & PHILOSOPHY” (1948):

Complete objectivity can only be regained by treating observer and observed as parts of a single system; these must now be supposed to constitute an indivisible whole, which we must now identify with nature, the object of our studies. It now appears that this does not consist of something we perceive, but of our perceptions, it is not the object of the subject-object relation, but the relation itself. But it is only in the small-scale world of atoms and electrons that this new development makes any appreciable difference; our study of the man-sized world can go on as before.

QT deals with the temporal and spatial patterns of events which we perceive to occur on a space-time scene, our “empirical reality”. QT makes no mention of “deep reality” behind the scene, so QT cannot be the point of contact if one wants to know or to explain “what is really going on.”
 
  • #68
A. Neumaier said:
Do you want to imply that a rocket flying to the moon is not a quantum system? What then is the size at which a system loses its describability as a quantum system?

Already the possibility of a reduced description requires that there is a theoretically possible, though unknown complete description, which we can reduce by coarse graining.
Of course not; as I said in the following paragraph, the classical behavior of the relevant macroscopic degrees of freedom is well understood from the usual coarse-graining procedures of quantum many-body theory. As far as matter is concerned, everything at the basic level is described by relativistic QFT. However, you cannot describe all systems in all microscopic detail, and thus one has to make approximations and find effective theories to describe the system at a level at which it can be described, and this can be done in various ways. E.g., bound-state problems are usually treated in non-relativistic approximations whenever this is possible, because that's much simpler than the relativistic description. Then, at the macroscopic level, one describes systems by classical physics, because that covers everything that's relevant at this level of description.
 
  • #69
Lord Jestocost said:
As far as I know, Renninger-type measurements are thought experiments, see, for example:
Towards a Nonlinear Quantum Physics
I don't have this book, but obviously it's not about quantum theory but some extension; I cannot judge what it is supposed to solve or "correct" in standard quantum mechanics. As far as I could read on Google, no valid mathematical description of the described experiment was given, but it's clear that ##D_1## is in any case another obstacle in the way of the particle, with which it interacts and which thus has to be taken into account to describe the system completely. It's a contradiction in itself to assume that ##D_1## is a detector and that the particles to be measured do not interact with it at the same time. Even if ##D_1## doesn't give a signal, the particle may still interact with it. The probability that ##D_1## gives a signal or not, as well as that for ##D_2##, has to be analyzed in detail. Usually there's some non-zero probability for a particle not to be detected at all, depending on the specific setup of the detector(s).
 
  • #70
vanhees71 said:
Of course not; as I said in the following paragraph, the classical behavior of the relevant macroscopic degrees of freedom is well understood from the usual coarse-graining procedures of quantum many-body theory. As far as matter is concerned, everything at the basic level is described by relativistic QFT. However, you cannot describe all systems in all microscopic detail
One cannot in practice. But foundations are about the principles, not the practice. All questions of interpretation concern the principles. There one has a single huge quantum system consisting of a tiny measured system, a macroscopic detector, and maybe a heat bath, and one wants to understand how the principles lead (for example) to unique outcomes - the measurement results of your coarse-grained description.

You answer all these foundational questions by replacing the in-principle-existing (though unknown) description of this large quantum system with a classical description - just as Bohr did. The foundational problems are then associated with this change of description, where you just say that it works and hence is fine, but people interested in the foundations want to understand why.
 
  • Like
Likes eloheim, Auto-Didact and dextercioby
  • #71
Good luck. I don't think that you can ever achieve this for any physical theory which describes nature. It's simply too complicated.

I never understood precisely what Bohr wanted to say, because of his overly philosophical, enigmatic writing style, but where he is surely right is that QT, as a description of what's observed in nature, is about the observations finally made with macroscopic measurement devices, and that their workings are well enough understood within classical physics. The validity of classical physics for macroscopic systems, as of quantum theory (in fact of any physical theory), is established by comparison with experiment and observation. I think the paper by Englert is brilliant, cleaning up the superfluous philosophical ballast of "solving" philosophical pseudo-problems that have nothing to do with physics and will most probably have no merit in leading to new, better theories.
 
  • Like
Likes Lord Jestocost
  • #72
vanhees71 said:
QT, as a description of what's observed in nature, is about the observations finally made with macroscopic measurement devices, and that their workings are well enough understood within classical physics.
Thus you now endorse the Heisenberg cut (between a quantum system and classically treated detector results), for which you never before saw any use...
 
  • Like
Likes Auto-Didact
  • #73
vanhees71 said:
Good luck. I don't think that you can ever achieve this for any physical theory which describes nature. It's simply too complicated.
...
It seems that a general theory is impossible. It can be done for some simple cases but each case has to be handled individually. The big problem is the probability amplitude, which cannot be physical as it stands, for dimensional reasons. How can there ever be a mapping from amplitude space to (say) pressure or any (thermo)dynamical variable?
 
  • #74
@vanhees71

To me it’s merely astonishing that Renninger-type measurements seem to be in every way equivalent to measurements in which something seems “actually” to happen. An English translation by W. De Baere of Renninger’s paper “Zum Wellen–Korpuskel–Dualismus” (Zeitschrift für Physik 136, 251-261 (1953)) can be found here: https://arxiv.org/abs/physics/0504043
 
  • #75
Mentz114 said:
It seems that a general theory is impossible. It can be done for some simple cases but each case has to be handled individually. The big problem is the probability amplitude, which cannot be physical as it stands, for dimensional reasons. How can there ever be a mapping from amplitude space to (say) pressure or any (thermo)dynamical variable?
I don't understand any of this. Why should there be a problem with a "probability amplitude" for dimensional reasons? Of course the dimension of the probability, and thus also of the "probability amplitude", depends on which (continuous) quantity it is given for. A distribution transforms as a distribution, which is why mathematically it's called a distribution. E.g., in the position representation the "probability amplitude" (usually simply called the wave function), ##\psi(\vec{x})##, of a single particle has dimension ##1/\text{length}^{3/2}##. No problem whatsoever.
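The dimension follows directly from the normalization condition:
$$
\int |\psi(\vec{x})|^2\,\mathrm{d}^3 x = 1
\;\Rightarrow\;
\big[|\psi|^2\big] = \frac{1}{\text{length}^3}
\;\Rightarrow\;
\big[\psi\big] = \frac{1}{\text{length}^{3/2}} .
$$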

I have no clue what you mean concerning thermodynamics. Quantum statistics is well-defined, and the thermodynamical quantities you mention are just that: thermodynamical quantities. What should they have to do with "amplitude space" (whatever that means)?
 
  • Informative
Likes Mentz114
  • #76
Lord Jestocost said:
@vanhees71

To me it’s merely astonishing that Renninger-type measurements seem to be in every way equivalent to measurements in which something seems “actually” to happen. An English translation by W. De Baere of Renninger’s paper “Zum Wellen–Korpuskel–Dualismus” (Zeitschrift für Physik 136, 251-261 (1953)) can be found here: https://arxiv.org/abs/physics/0504043
I've looked at the German original because I thought there must be errors in the translation. To my astonishment that's not the case. I'm puzzled that such a paper could ever appear in a serious physics journal such as "Zeitschrift für Physik". Nothing he says about "photons" makes any sense, nor have such things as the path of a single photon or a guiding wave ever been observed. Despite his claim, no convincing pilot-wave theory à la de Broglie and Bohm has been formulated for photons, nor for relativistic particles in general.
 
  • #77
vanhees71 said:
... of a single particle has dimension ##1/\text{length}^{3/2}##...
How many furlongs is that ?
 
  • Like
Likes Auto-Didact
  • #78
Mentz114 said:
How many furlongs is that ?
Eight furlongs per keel
 
  • Like
Likes Auto-Didact and Mentz114
  • #79
DarMM said:
so quantum observables are as much a property of the device as the quantum system itself.
Yep. It seems to me that :

The measurement is a projection onto the basis defined by the measuring instrument. We measure a trace left by the system on the measuring device that makes sense to our consciousness as human observers (through the mediation of our brains).
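A minimal sketch, assuming an ideal projective measurement with projectors ##P_i## fixed by the instrument:
$$
p(i) = \langle\psi|P_i|\psi\rangle , \qquad |\psi\rangle \;\to\; \frac{P_i|\psi\rangle}{\sqrt{p(i)}} ,
$$
so the possible "traces" left on the device are tied to the set of projectors (the basis) that the instrument defines.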

/Patrick
 
  • #80
PeterDonis said:
The issue with the minimal interpretation is that there is no rule that tells you when a measurement occurs. In practice the rule is that you treat measurements as having occurred whenever you have to in order to match the data. So in your example, since nobody actually observes observers to be in superpositions of pointer states, and observers always observe definite results, in practice we always treat measurements as having occurred by the time an observer observes a result.
stevendaryl said:
So, the minimal interpretation ultimately gives a preference to macroscopic quantities over other variables, but this preference is obfuscated by the use of the word "measurement". The inconsistency is that if you treat the macroscopic system as a huge quantum mechanical system, then no measurement will have taken place at all. The macroscopic system (plus the environment, and maybe the rest of the universe) will not evolve into a definite pointer state.
Is all of this not just a problem related to QM being a probabilistic theory?

For example, if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial state of the system ##\rho_0## and it evolves into a later one ##\rho_t##. Nothing in the formalism will tell me when a measurement occurs to allow me to reduce ##\rho## to a tighter state with smaller support (i.e. Bayesian updating). Just as nothing in a probabilistic model of a die will tell me when to reduce the uniform distribution over outcomes down to a single outcome. Nothing says what a measurement is.

Similarly one could zoom out to a larger agent, who uses a distribution not only over the gas from the first example but also over the state space of the device used to measure it (staying within classical mechanics for now). His ##P## distribution will evolve under Liouville's equation to involve multiple detection states for the device, in contrast to my case where the device lies outside the probability model and is used to learn of an outcome.

Any probability model contains the notion of an "agent" who "measures/learns" the value of something. These ideas are primitives unexplained in probability theory (i.e. what "causes" Bayesian updating). Any "zoomed out" agent placing my devices within their probability model will not consider them to have an outcome when I do until they themselves "look".
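As a toy sketch of this (a die plus a pointer that perfectly copies it; the variable names and the particular reading are just illustrative, and "looking" is modeled as conditioning):

```python
import numpy as np

# Joint distribution P(die, pointer) after the "measurement interaction":
# the pointer is perfectly correlated with the die, but probabilistically
# both remain unresolved.
n = 6
joint = np.zeros((n, n))
for i in range(n):
    joint[i, i] = 1.0 / n  # uniform over correlated (die, pointer) pairs

# Superobserver: keeps the device inside the probability model.
# Their marginal over the pointer is still a mixture over all readings.
pointer_marginal = joint.sum(axis=0)
print("superobserver's pointer distribution:", pointer_marginal)  # six values of 1/6

# Observer: looks at the pointer, happens to see reading 3, and conditions.
# Nothing in the formalism itself says *when* this conditioning step occurs.
reading = 3
die_given_reading = joint[:, reading] / joint[:, reading].sum()
print("observer's die distribution after looking:", die_given_reading)  # peaked at 3
```

The two agents use the same formalism; they differ only in what they place inside the probability model and when they condition.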

So to me all of this is replicated in Classical probability models. It's not a privileged notion of macroscopic systems, but unexplained primitive notions of "agent" and "learning/updating" common to all probability models. Introduce an epistemic limit in the Classical case and it becomes even more similar to QM, with non-commutativity, no cloning of pure states, superdense coding, entanglement monogamy, Wigner's friend being mathematically identical to the quantum case, etc.

The major difference between QM and a classical probability model is the fact that any mixed state has a purification on a larger system, i.e. less than maximal knowledge of a system ##A## can always be seen as being induced by maximal knowledge on a larger system ##B## containing ##A## (D'Ariano, Chiribella, Perinotti axioms). This is essentially what is occurring in Wigner's friend. Wigner has a mixed state for his friend's experimental device because he has maximal possible knowledge (a pure state) for the Lab as a whole. The friend does not track the lab as a whole and thus he can have maximal knowledge (a pure state) for the device.
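A minimal sketch of that purification point in the Wigner's friend setting (##|i\rangle_S## system outcomes, ##|D_i\rangle## device readings; the notation is only illustrative):
$$
|\Psi\rangle_{\text{Lab}} = \sum_i c_i\,|i\rangle_S\,|D_i\rangle
\quad\Rightarrow\quad
\rho_D = \mathrm{Tr}_S\,|\Psi\rangle\langle\Psi| = \sum_i |c_i|^2\,|D_i\rangle\langle D_i| ,
$$
so Wigner's maximal knowledge of the lab (a pure state) induces a mixed state for the device, while the friend, who does not track the lab as a whole, can assign the device a definite ##|D_k\rangle##.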

So as long as QM is based on probability theory viewed in the usual way you will always have these odd notions of "when does a measurement occur/when do I update my probabilities" and "I consider event ##A## to have occurred, but somebody else might not". You could say this is a discomfort from having probability as a fundamental notion in your theory.

If one wishes, a way out of this would be @A. Neumaier 's view, where he reads the formalism differently and not in the conventional statistical manner.
 
Last edited:
  • Like
Likes dextercioby, akvadrako, vanhees71 and 2 others
  • #81
DarMM said:
So to me all of this is replicated in Classical probability models. It's not a privileged notion of macroscopic systems, but unexplained primitive notions of "agent" and "learning/updating" common to all probability models.
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
 
  • Like
Likes Auto-Didact
  • #82
A. Neumaier said:
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
Do you mean Subjective Bayesianism (e.g. de Finetti) or are you using "Subjective" to denote Bayesianism in general?
 
Last edited:
  • Like
Likes Auto-Didact
  • #83
DarMM said:
Do you mean Subjective Bayesianism (e.g. DeFinetti) or are you using "Subjective" to denote Bayesianism in general?
I call a probability theory subjective if the probabilities depend on an ill-defined agent (rather than on objective contexts only). Bayesian probability seems to me a synonym for that.
 
  • #84
A. Neumaier said:
Only in classical subjective (Bayesian) probability. Frequentist interpretations have neither such notions nor the associated problems.
They still have Bayesian updating and relativism about when that updating occurs. For example, in the case of the classical gas, in the actual model used by the observers and superobservers the distributions have the same behavior regardless of what view one holds of probability theory.

My post uses Bayesian language, but even in the frequentist case the superobserver will continue to use a mixture over outcomes where the observer will not, up until he views the system. That's just a feature of probability theory.

You'll still have the notion of what you don't include in the probability side of your model and updating/conditioning. I don't see what is different in the sense relevant here.

Basically you can still replicate Wigner's friend even under a frequentist view.
 
  • #85
A. Neumaier said:
I call a probability theory subjective if the probabilities depend on an ill-defined agent (rather than on objective contexts only). Bayesian probability seems to me a synonym for that.
That's not the conventional usage though right? There is Objective Bayesianism. Subjective is usually reserved for views like de Finetti and Savage.
 
Last edited:
  • #86
DarMM said:
They still have Bayesian updating and relativism about when that updating occurs.
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs.
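Concretely, for events ##A## and ##B##,
$$
P(A\mid B) = \frac{P(A\cap B)}{P(B)} ,
$$
which is a fixed property of the ensemble (the relative frequency of ##A## within the subensemble where ##B## holds), defined whether or not anyone ever conditions on ##B##.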
DarMM said:
even in the frequentist case the superobserver
Neither are there observers or superobservers. I have never seen the notion of superobservers in classical probability of any kind.

All subjective updating happens outside probability theory when some subject wants to estimate the true probabilities about which the theory is.
DarMM said:
Basically you can still replicate Wigner's friend even under a frequentist view.
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
 
  • #87
DarMM said:
That's not the conventional usage though right? There is Objective Bayesianism. Subjective is usually reserved for views like DeFinetti and Savage.
What is Objective Bayesianism? Where is an authoritative classification? I am not an expert in classifying interpretations...
 
  • #88
A. Neumaier said:
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs.
Bayes' relationship leads to a symmetry noted by Laplace,

$$P(A\mid B)\,P(B) \;=\; P(B\mid A)\,P(A) \;=\; P(A\cap B),$$

which can be rewritten as

$$P(H\mid E) \;=\; \frac{P(E\mid H)\,P(H)}{P(E)},$$

which brings us back to conditional probability. This makes it possible to calculate the probability of causes from events. Laplace called this the inverse probability (the plausibility of the hypothesis). It has many applications in the theory of knowledge.

Frequentist statisticians refuse to reason about the plausibility of hypotheses. They work with hypothesis rejection instead: a hypothesis is tested by calculating the likelihood of its results. Frequentists do not adhere to the concept of inverse probability because of the prior, which is subjective.

Subjectivity also exists with the frequentist method; it is just swept under the carpet.

/Patrick
 

  • #89
microsansfil said:
This makes it possible to calculate the probability of causes from events.
Nothing here is about causes. Bayes' formula just relates two different conditional probabilities.
 
  • Like
Likes vanhees71
  • #90
A. Neumaier said:
What is Objective Bayesianism? Where is an authoritative classification? I am not an expert in classifying interpretations...
Many books on the philosophy of Probability theory delve into the details, but the Wikipedia link here has the most pertinent details:
https://en.wikipedia.org/wiki/Bayesian_probability#Objective_and_subjective_Bayesian_probabilities
It's mostly about prior probabilities. Objective Bayesians build off of Cox's theorem and Subjective Bayesians off of DeFinetti's work.

The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"

For the Subjective Bayesian outlook I like J. Kadane's "Principles of Uncertainty" or DeFinetti's "Theory of Probability: A Critical Introductory Treatment"
 
  • #91
A. Neumaier said:
Not really. Frequentists just have conditional probability, i.e., probabilities relative to a subensemble of the original ensemble. Nobody is choosing or updating anything; it never occurs
Well you learn you are in a subensemble then. Does this change much? It's still the case that the theory doesn't specify when you "learn" you're in a given subensemble.

In all views you will update your probabilities; regardless of what meaning you give to this, it occurs across all views. The point is that the theory never gives a formal account of how this comes about. It's just a primitive of probability theory.

A. Neumaier said:
Neither are there observers or superobservers. I have never seen the notion of superobservers in classical probability of any kind
One person is including just the system in the probability model (observer), the other is including the system and the device (superobserver). That's all a superobserver is really. The notion can be introduced easily.

A. Neumaier said:
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
I have to say I don't understand this. The Bayesian view of probability does not permit logical faults either, under de Finetti's or Cox's constructions. Unless you mean something I don't understand by "logical faults". In fact the point of Cox's theorem is that Probability is Logic under uncertainty.

Regarding the sentence in bold, can you be more specific about what you mean by Wigner's friend not being possible under a frequentist view? I really don't understand.
 
  • #92
DarMM said:
Wikipedia (Bayesian probability) said:
For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem.
What a robot finds reasonable depends on how it is programmed, hence is (in my view) subjective.
What should count as knowledge is conceptually very slippery and should not figure in good foundations.
Wikipedia (Cox's theorem) said:
Cox wanted his system to satisfy the following conditions:
  1. Divisibility and comparability – The plausibility of a proposition is a real number and is dependent on information we have related to the proposition.
  2. Common sense – Plausibilities should vary sensibly with the assessment of plausibilities in the model.
  3. Consistency – If the plausibility of a proposition can be derived in many ways, all the results must be equal.
Even though a unique plausible concept of probability comes out after making the rules mathematically precise, I wouldn't consider this objective since it depends on ''information we have'', hence on a subject.

Rather than start with a complicated set of postulates that make recourse to subjects and derive standard probability from them, it is much more elegant, intuitive, and productive to start directly with the rules for expectation values featured by Peter Whittle (and recalled in physicists' notation in Section 3.1 of my Part II). I regularly teach applied statistics on this basis, from scratch.
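Roughly, the kind of rules meant (this is only a sketch; the precise axioms in Whittle's book and in the cited Section 3.1 may differ in detail) characterize an expectation functional ##E## directly:
$$
E[\alpha X + \beta Y] = \alpha\,E[X] + \beta\,E[Y], \qquad X \ge 0 \;\Rightarrow\; E[X] \ge 0, \qquad E[1] = 1 ,
$$
together with a suitable continuity requirement; probabilities are then derived quantities, ##P(A) = E[1_A]##.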

DarMM said:
The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"
There are no objective priors, and Jaynes' principle of maximum entropy (Chapter 11) gives completely wrong results for thermodynamics if one assumes knowledge of the wrong prior and/or the wrong expectation values (e.g., that of ##H^2## rather than that of ##H##). One needs to be informed by what actually works to produce the physically correct results from max entropy. A detailed critique is in Section 10.7 of my online book.
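For instance (a standard textbook illustration, not a quote from the cited section): maximizing entropy subject to a constraint on ##\langle H\rangle## yields the canonical ensemble, while constraining ##\langle H^2\rangle## instead yields a physically different state,
$$
\rho_{\langle H\rangle} \propto e^{-\beta H} \qquad \text{vs.} \qquad \rho_{\langle H^2\rangle} \propto e^{-\lambda H^2} ,
$$
so max entropy reproduces thermodynamics only if one already knows which expectation values to constrain.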
 
Last edited:
  • Informative
Likes Mentz114
  • #93
A. Neumaier said:
There are no objective priors, and Jaynes' principle of maximum entropy (Chapter 11) gives completely wrong results for thermodynamics if one assumes knowledge of the wrong prior and/or the wrong expectation values (e.g., that of ##H^2## rather than that of ##H##). One needs to be informed by what actually works to produce the physically correct results from max entropy. A detailed critique is in Section 10.7 of my online book.
That's pretty much the argument most Subjective Bayesians use against Objective Bayesianism. Certainly I know you do not like Probability in the Foundations, thus the Thermal Interpretation. It is for this reason I mentioned it in #80
 
  • #94
DarMM said:
you learn you are in a subensemble then. Does this change much? It's still the case that the theory doesn't specify when you "learn" you're in a given subensemble.
No. You assume that you are in a subensemble. This assumption may be approximately correct, but human limitations in this assessment are irrelevant for the scientific part.

Theory never specifies which assumptions are made by one of its users. It only specifies what happens under which assumptions.
DarMM said:
In all views you will update your probabilities
I may update probabilities according to whatever rules seem plausible to me (never fully rational), or whatever rules are programmed into the robot that makes the decisions. But the updating is a matter of decision making, not of science.
DarMM said:
The point is that the theory never gives a formal account of how this comes about.
My point is that theory is never about subjective approximations to objective matters. It is about what is objective. How humans, robots, or automatic experiments handle it is a matter of psychology, artificial intelligence, and experimental planning, respectively, not of the theory.
DarMM said:
One person is including just the system in the probability model (observer), the other is including the system and the device (superobserver). That's all a superobserver is really.
The only observers of a classical Laplacian universe are Maxwell's demons, and they cannot be included in a classical dynamics. So their superobservers aren't describable classically.
DarMM said:
I don't understand this I have to say. The Bayesian view of probability does not permit logical faults
I was talking about my views on subjective and objective. A subject isn't bound by rules. This makes all Bayesian derivations very irrelevant, no matter how respectable the literature about it. They discuss what should be the case, not what is the case. But only the latter is the subject of science. Bayesian justifications are ethical injunctions, not scientific arguments.
DarMM said:
can you be more specific about what you mean by Wigner's friend not being possible under a frequentist view?
They are of course possible, but their assessment of the situation is (in my view) just subjective musings, approximations they make up based on what they happen to know. Thus there is no need for physics to explain their findings.

What would be of interest is a setting where Wigner and his friend are both quantum detectors, and their ''knowledge'' could be specified precisely in terms of properties of their state. Only then the discussion about them would become a matter of physics.
DarMM said:
I know you do not like Probability in the Foundations, thus the Thermal Interpretation.
I have nothing at all against probability in the frequentist sense. The only problem with having it in the foundations is that frequentist statements about systems that are unique are meaningless.
But the foundations must apply to our solar system, which is unique from the point of view of what physicists from our culture can measure.
 
  • Like
Likes Auto-Didact
  • #95
A. Neumaier said:
Theory never specifies which assumptions are made by one of its users. It only specifies what happens under which assumptions.
A. Neumaier said:
But the updating is a matter of decision making, not of science.
A. Neumaier said:
My point is that theory is never about subjective approximations to objective matters. It is about what is objective
A. Neumaier said:
I was talking about my views on subjective and objective. A subject isn't bound by rules. This makes all Bayesian derivations very irrelevant, no matter how respectable the literature about it
A. Neumaier said:
They are of course possible, but their assessment of the situation is (in my view) just subjective musings, approximations they make up based on what they happen to know. Thus there is no need for physics to explain their findings.
A. Neumaier said:
I have nothing at all against probability in the frequentist sense. The only problem with having it in the foundations is that frequentist statements about systems that are unique are meaningless.
But the foundations must apply to our solar system, which is unique from the point of view of what physicists from our culture can measure.
Just going by these, are you basically giving your reasons for not liking the typical statistical view (either Bayesian or Frequentist) of probability in the Foundations? Probability involves updating in both views, Bayesian or Frequentist.

You are basically saying you prefer a non-statistical reading of things in the Foundations, as I mentioned as an alternative in #80.
 
Last edited:
  • #96
DarMM said:
are you basically giving your reasons for not liking the typical statistical view (either Bayesian or Frequentist) of probability in the Foundations? Probability involves updating in both views, Bayesian or Frequentist.
No.

I am perfectly happy with a frequentist view of classical probability as applying exactly to (fully or partially known) ensembles, to which any observer (human or not) assigns - usually as consistently as feasible - approximate values based on data, understanding, and guesswork.

But the theory (the science) is about the true, 100% exact frequencies and not about how to assign approximate values. The latter is an applied activity, the subject of applied statistics, not of probability theory. Applied statistics is a mixture of science and art, and has - like any art - subjective aspects. I teach it regularly and without any metaphysical problems (never a student asking!) based on Peter Whittle's approach, Probability via Expectation. (Theoretical science also has its artistic aspects, but these are restricted to the exposition of the theory and preferences in the choice of material.)

The only reason I cannot accept probability in the foundations of physics is that the latter must apply to unique large systems, while the classical notion of probability cannot do this. By axiomatizing the more basic notion of uncertainty instead of probability, and treating probability as a derived concept, I found the way out - the thermal interpretation.

Bayesian thinking (including any updating - exact values need no updating) is not science but belongs 100% to the art of applied statistics, supported by a little, fairly superficial theory based on ugly and contentious axioms. I had studied these in some detail many years ago, starting with the multivolume treatise on the foundations of measurement by Suppes, and found this (and much more) of almost no value - except to teach me what I should avoid.
DarMM said:
That's pretty much the argument most Subjective Bayesians use against Objective Bayesianism.
They are driving out the devil with Beelzebub.
 
Last edited:
  • Like
Likes Auto-Didact
  • #97
A. Neumaier said:
The only reason I cannot accept probability in the foundations of physics is that the latter must apply to unique large systems, while the classical notion of probability cannot do this. By axiomatizing the more basic notion of uncertainty instead of probability, and treating probability as a derived concept, I found the way out - the thermal interpretation.
I appreciate your post, but this does seem to me to be about not liking Probability in the foundations, Bayesian or Frequentist. My main point was that most of the issues people here seem to be having with the Minimal Statistical view or similar views like Neo-Copenhagen or QBism* reduce to the issue of having a typical statistical view (again in either sense) of Probability.

As I said, understanding the probabilistic terms in a new way, detached from the typical views, is the only way out of these issues if one does not like this. Hence the final line of #80.

*They mainly differ only in whether they like Frequentism, Objective Bayesian or Subjective Bayesian approaches. They agree with each other on virtually all other issues.
 
  • #98
DarMM said:
but this does seem to me to be about not liking Probability in the foundations, Bayesian or Frequentist.
It is not about not liking it but a specific argument why having probability in the foundations makes the foundations invalid. I'd not mind having probability in the foundations if it would appear only in properties of tiny subsystems of large unique systems.
 
  • #99
A. Neumaier said:
It is not about not liking it but a specific argument why having probability in the foundations makes the foundations invalid. I'd not mind having probability in the foundations if it would appear only in properties of tiny subsystems of large unique systems.
Yes, but that's what I was talking about. The issues here seem to be issues with probability in the Foundations. The "liking" was not meant to imply you lacked an argument or were operating purely on whimsy. :smile:
 
  • #100
DarMM said:
Is all of this not just a problem related to QM being a probabilistic theory?

For example, if I model a classical system such as a gas using statistical methods like Liouville evolution, I start with an initial state of the system ##\rho_0## and it evolves into a later one ##\rho_t##. Nothing in the formalism will tell me when a measurement occurs to allow me to reduce ##\rho## to a tighter state with smaller support (i.e. Bayesian updating). Just as nothing in a probabilistic model of a die will tell me when to reduce the uniform distribution over outcomes down to a single outcome. Nothing says what a measurement is.

Maybe a good way for you to think about the difference is that classically, the idea of preexisting hidden variables underlying measurements is very easy and natural and intuitive, to the extent that everyone (who would want to be a realist/materialist) would simply adopt a HV interpretation of classical physics that escapes all these issues around measurement.

In QM, HVs are highly constrained and unintuitive. In response, some people bite the bullet and try to still make them work, some go to many worlds, some change the physics itself (GRW). But other would-be realists decide to give up on realism, and thus face the issues with measurement and probability being fundamental.

So, I think you are right that there is a very similar set of philosophical problems for a classical antirealist as for a quantum antirealist, and ultimately part of being a true antirealist is not caring about this. The difference is that many quantum antirealists are not true antirealists. Many are just defeated realists who only dislike antirealism slightly less than they dislike the options in quantum realism, but still believe in all the downsides of antirealism, and think this should be broadcast. Others are okay with one or more of the quantum realist options, but are forced to learn the antirealist view in textbooks, and so will talk about the issues with antirealism to try to remedy this bias. Because of these cultural realities, this debate, which you correctly identify as being over antirealism writ large and not specific to QM, ends up being cashed out only in the context of quantum antirealism.
 
  • Like
Likes eloheim, Auto-Didact and DarMM
