Against "interpretation" - Comments

Summary
The discussion centers on the concept of "interpretation" in quantum mechanics (QM), emphasizing that interpretations like Copenhagen and Many Worlds (MWI) yield the same experimental predictions and are thus not fundamentally different theories. Participants argue that these interpretations often reflect subjective preferences rather than resolvable disagreements, leading to limited value in their discussions. The distinction between theories and models is highlighted, suggesting that interpretations should be viewed as informal descriptions rather than separate theories. There is a consensus that while interpretations may help in understanding QM, they do not provide new predictions, and the search for a definitive interpretation may be futile. The conversation underscores the importance of rigorous understanding of theories and models in scientific discourse.
  • #151
Dale said:
Nonsense
Please read my whole posts and don't make ridiculous arguments with meaningless theories!

As already said, the mathematical framework of a successful physical theory has (and must have) enough of its important concepts labelled not a, b, c but with sensible concepts from the world of experimental physics, so that the subjective part of the interpretation is constrained enough to be useful.

For example, take the mathematical framework defined by ''Lines are sets of points. Any two lines intersect in a unique point. There is a unique line through any two points.'' (This defines the mathematical concept of a projective plane.) This is sufficiently constrained that every schoolboy knows without any further explanation how to apply it to experiment, and can check its empirical validity. There are some subjective interpretation questions regarding parallel lines, whose existence would be thought to falsify the theory, but the theory is salvaged by allowing, in the subjective interpretation, points at infinity. Another, more sophisticated subjective interpretation, treating lines as great circles on the sphere (indistinguishable from straight lines given poor experimental capabilities), would be falsified, since there are multiple such lines through antipodal points.
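As a concrete sketch (in Python, using the seven-point Fano plane as an assumed finite model; the point labels 0-6 are arbitrary), both axioms of this framework can be checked mechanically:

```python
from itertools import combinations

# Fano plane: the smallest projective plane, with 7 points and 7 lines.
points = range(7)
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

# Axiom: any two distinct lines intersect in exactly one point.
assert all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))

# Axiom: any two distinct points lie on exactly one common line.
assert all(sum(p in l and q in l for l in lines) == 1
           for p, q in combinations(points, 2))

print("both projective-plane axioms hold for the Fano plane")
```

Any relabelling of the points leaves both checks intact, which is the sense in which the particular names play no role in the framework itself.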

This shows that there is room for nontrivial subjective interpretation, and that the discussion of its testability is significant, as it may mean progress by adding more details to the theory in a way that eliminates the undesired interpretations.

Dale said:
you cannot do an experiment with only that “experimental meaning”. It is insufficient for applying the scientific method.
What did you expect? A mathematical framework of 4 characters is unlikely to give much information about experiment. It says no more than what I claimed.

Most theories are inconsistent with experiment, and only a few successful ones are consistent with it. Only these are what the philosophy of science is about, and they are typically of textbook size!
Dale said:
Suppose I do an experiment and measure 6 values: 1, 2, 3, 4, 5, 6. Using only the above framework and your supposed “experimental meaning” do the measurements verify or falsify the theory?
They verify the theory if you measured a=2, b=3, c=6, and they falsify it if you measured a=2, b=3, c=5. Given your framework, both are admissible subjective interpretations. Your framework is too weak to constrain the subjective interpretation, so some will consider it correct, others invalid, and still others think it is incomplete and needs better foundations. The future will tell whether your new theory ##ab=c## will survive scientific practice...
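A minimal sketch, assuming nothing beyond the toy theory ##ab=c## itself, of how the same six measured values admit both verifying and falsifying assignments:

```python
from itertools import permutations

# The six measured values from the example above.
measured = [1, 2, 3, 4, 5, 6]

# Every way of subjectively interpreting three of them as (a, b, c):
verdicts = {(a, b, c): a * b == c for a, b, c in permutations(measured, 3)}

# Some subjective interpretations verify the theory ab = c ...
assert verdicts[(2, 3, 6)]
# ... while others falsify it, from the very same data.
assert not verdicts[(2, 3, 5)]

print(sum(verdicts.values()), "of", len(verdicts), "assignments verify ab = c")
```

The framework alone does not say which assignment is the right one; that is exactly the unconstrained subjective interpretation.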

Just like in the early days of quantum mechanics, where the precise content of the theory was not yet fixed, and all its (subjective, since disagreeing) interpretations had successes and failures - until a sort of (but not unanimous) consensus was achieved.
 
  • Like
Likes Auto-Didact and dextercioby
  • #152
A. Neumaier said:
As already said, the mathematical framework of a successful physical theory has (and must have) enough of its important concepts labelled not a,b,c but with sensible concepts from the world of experimental physics
What you are describing here is more than just the mathematical framework. That is the mathematical framework plus a mapping to experiment. This mapping to experiment is what distinguishes a scientific theory from a mathematical framework. That is the objective interpretation.

I read your posts, but you are using the words in such a strange way that reading doesn’t help. What I wrote above is directly what I got from reading it.
 
  • #153
Dale said:
What you are describing here is more than just the mathematical framework. That is the mathematical framework plus a mapping to experiment.
The names are traditionally part of the mathematical framework, not a separate interpretation. Look at any mathematical theory with some relation to ordinary life, e.g., the modern axioms for Euclidean geometry or for real numbers, or Kolmogorov's axioms for probability!

The naming provides a mapping of mathematical concepts to concepts assumed already known (i.e., to informal reality, as I use the term). This part is the objective interpretation and is independent of experiment. This is necessary for a good theory, since the relation between a mathematical framework and its physics must remain the same once the theory is mature. A mature scientific theory fixes the meaning of the terms uniquely on the mathematical level so that there can be no scientifically significant disagreement about the possible interpretation, using just Callen's criterion for deciding upon the meaning.

On the other hand, experimental art changes with time and with improving theory. We now have many more ways of measuring things than 100 years ago, which usually need theory even to be related to the old notions. There are many thousands of experiments, and new and better ones are constantly devised - none of these experiments appear in the objective interpretation part of a theory - at best a few paradigmatic illustrations!

The theories that have fairly large and still somewhat controversial interpretation discussions are probability theory, statistical mechanics, and quantum mechanics. It is not a coincidence that precisely in these cases the naming does not suffice to pin down the concepts sufficiently to permit an unambiguous interpretation. Hence the need arose to add more interpretive stuff. Most of the extra stuff is controversial, hence the many interpretations. The distinction between subjective and objective interpretation does not help here, because people do not agree upon the meaning that should deserve the label objective!

Please reread my post #147 in this light.

Dale said:
I read your posts, but you are using the words in such a strange way that reading doesn’t help. What I wrote above is directly what I got from reading it.
Well, I had said,

A. Neumaier said:
As I said, in simple cases, the interpretation is simply calling the concepts by certain names. In the case of classical Hamiltonian mechanics, ##p## is called momentum, ##q## is called position, ##t## is called time, and everyone is supposed to know what this means, i.e., to have an associated interpretation in terms of reality.
I cannot understand how this can be misinterpreted after I had explained that for me, reality just means the connection to experiment.

Dale said:
Anything necessary to predict the outcome of an experiment is objective.
But in probability theory, statistical mechanics, and quantum mechanics, different people differ in what they consider necessary. So how can it be objective?

Dale said:
It is a perfectly valid mathematical framework, one of the most commonly used ones in science.
No. ##ab=c## is just a formula. Without placing it in a mathematical framework it does not even have an unambiguous mathematical meaning.

The mathematical framework to which it belongs could be perhaps Peano arithmetic. This contains much more, since it says what natural numbers are (in purely mathematical terms), how they are added and multiplied, and that the variables denote arbitrary natural numbers.

Then ##ab=c## gets (among many others) the following experimental meaning: Whenever you have a children, b apples, and c ways of pairing children and apples, then the product of a and b equals c. This is testable and always found correct. (If not, one questions the counting procedure, not the theory.)
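A minimal sketch of this counting meaning (the names of the children and apples are hypothetical, chosen only for illustration):

```python
from itertools import product

# Hypothetical names, purely for illustration.
children = ["Ann", "Bob"]              # a = 2
apples = ["red", "green", "blue"]      # b = 3

# Experimental meaning of ab = c: count the ways of pairing
# one child with one apple.
pairings = list(product(children, apples))

assert len(pairings) == len(children) * len(apples)  # c = 2 * 3 = 6
```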

Thus no interpretation is needed beyond the mathematical framework itself. Every child understands this.
 
  • #154
A. Neumaier said:
The names are traditionally part of the mathematical framework, not a separate interpretation.
While that is true, within the mathematical framework itself the names are merely arbitrary symbols. This is why a, b, and c are perfectly valid elements of the mathematical framework of a scientific theory.

The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive. This becomes particularly important when different theories use the same name for different concepts. The mapping to experiment is different because the names are merely arbitrary symbols, and the same name does not force the same mapping for different theories.

A. Neumaier said:
The naming provides a mapping of mathematical concepts to concepts assumed already known
My understanding of your previous comments was that this mapping is precisely what we were calling the “objective interpretation”, not the mathematical framework. Otherwise the objective interpretation is empty. I am fine with that, but it is a change from the position I thought you were taking above.

A. Neumaier said:
So how can it be objective?
“Objective” was your word, not mine. I am not sure why you are complaining to me about your own word.

A. Neumaier said:
a children, b apples, and c ways of pairing children and apples
Again, I understood from our previous discussion that this mapping from the mathematical symbols to experimental quantities is what we were calling the objective interpretation.
 
Last edited:
  • #155
DarMM said:
This is just a difference in the use of the word "Foundations", which is sometimes used to include interpretations.

Also see the parts in bold.

"There is no debate in Foundations of probability if we ignore the guys who say otherwise and one of them lost anyway, in my view"

Seems very like the kind of thing I see in QM Foundations discussions.

"Ignore Wallace's work on the Many Worlds Interpretation it's a mix of mathematics and philosophical polemic"
(I've heard this)
"Copenhagen has been shown to be completely wrong, i.e. Bohr lost" (also heard this)

In my opinion there's a major lack of focus in your post. My comment about de Finetti had to do with the axioms used (finite vs countable additivity). The axioms selected have really nothing to do with QM interpretations.

DarMM said:
I think if I asked a bunch of subjective Bayesians I'd get a very different view of who "won" and "lost".

Jaynes is regarded as a classic by many people I've spoken to, I'm not really sure why I should ignore him.

I don't know why we're talking about best seller general audience books.

As I've already said, the books mentioned in posts 109 and 111 did not include Jaynes' book. I'm trying to be disciplined and actually keep the line of conversation coherent. Jaynes' views were said to be addressed by a different author and that is what my posts have been about.

I never asserted anything was a "best seller general audience book" and I don't think sales have much to do with anything here. I did say that the books mentioned were not math books and they were aimed at a general audience.

Bayesians are in general fine with the Kolmogorov formulation of probability. I don't know what you're talking about here... it seems @atyy already addressed this.

DarMM said:
"Foundations" here includes interpretations, so "Kolmogorov vs Jaynes" for example was meant in terms of their different views on probability. There are others like Popper, Carnap. Even if you don't like the word "Foundational" being applied it doesn't really change the basic point.
I've actually read a couple of Popper books, but I don't care about what he has to say about probability - he was not mathematically sophisticated enough. I struggle to figure out why you brought up philosophers here. It's something of a red flag. If you brought up, say, the views of some mixture of Fisher, Wald, Doob, Feller and some others, that would be a very different matter.

DarMM said:
Also note that in some cases there is disagreement over which axioms should be the Foundations. Jaynes takes a very different view from Kolmogorov here, eschewing a measure theoretic foundation.

I don't know what this has to do with anything. Measures are the standard analytic glue for probability. That's the settled point. There are also non-standard analysis formulations of probability (e.g. Nelson). The book I referenced by Vovk and Shafer actually tries to redo the formulation of probability, getting rid of measure theory in favor of game theory. The mechanism is betting. It's a work in progress designed to try to get people to think in a different way.

I don't think Jaynes had a complete formulation of probability but that isn't the main problem. He's perfectly fine to read if you already know a lot about probability. Part of the problem is that people who don't know much about probability read his book and then they over-fit their understanding of probability theory to his polemic. The fact that you keep bringing him up is very worrisome in this regard.
 
  • #156
atyy said:
...Bayesians can use the Kolmogorov axioms, just interpreted differently. (And yes, interpretation is part of Foundations, but the Kolmogorov part is settled.)

I think interpretation is even settling, with de Finetti having won in principle, but in practice one uses whatever seems reasonable, or both as this cosmological constant paper did: https://arxiv.org/abs/astro-ph/9812133.

I didn't understand the italicized part. Countable additivity is typically used because it is mathematically convenient. I'm only aware of a small handful of serious probability works that use finite additivity (e.g. Dubins and Savage's book, in addition to de Finetti). I skimmed the link and don't really get your comment. When people say things like

"the ratio of densities is a special, infinitesimal value of order ##10^{-100}## in order for the two densities to coincide today."
I infer that mathematical subtleties don't have much to do with it.

Perhaps you are referring to something else related to de Finetti?
 
  • #157
StoneTemplePython said:
I didn't understand the italicized part. Countable additivity is typically used because it is mathematically convenient. I'm only aware of a small handful of serious probability works that use finite additivity (e.g. Dubins and Savage's book, in addition to de Finetti). I skimmed the link and don't really get your comment. When people say things like

"the ratio of densities is a special, infinitesimal value of order ##10^{-100}## in order for the two densities to coincide today."
I infer that mathematical subtleties don't have much to do with it.

Perhaps you are referring to something else related to de Finetti?

I wasn't thinking of finite additivity at all. I think modern Bayesians use Kolmogorov eg. http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf is written by a Bayesian and a frequentist (maybe I'm oversimplifying), and both accept Kolmogorov. Just that in general, Bayesian thinking is valued for its intellectual framework of coherence eg. http://mlg.eng.cam.ac.uk/mlss09/mlss_slides/Jordan_1.pdf. Also, the concept of exchangeability and the representation theorem are generally taught nowadays, at least in statistics/machine learning: https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
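For reference (stated here only for binary sequences), de Finetti's representation theorem says that an infinite exchangeable sequence of 0-1 variables is distributed as a mixture of i.i.d. Bernoulli sequences:

$$P(X_1 = x_1, \dots, X_n = x_n) = \int_0^1 \theta^{k}(1-\theta)^{n-k}\, d\mu(\theta), \qquad k = \sum_{i=1}^n x_i,$$

for every ##n## and a single probability measure ##\mu## on ##[0,1]##. The mixing measure ##\mu## plays the role of a prior over the unknown Bernoulli parameter, which is why the theorem matters so much to Bayesians.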

Since this is a quantum thread, let's add https://arxiv.org/abs/quant-ph/0104088 as another example of de Finetti's influence.
 
Last edited:
  • #158
StoneTemplePython said:
I don't think Jaynes had a complete formulation of probability but that isn't the main problem. He's perfectly fine to read if you already know a lot about probability. Part of the problem is that people who don't know much about probability read his book and then they over-fit their understanding of probability theory to his polemic. The fact that you keep bringing him up is very worrisome in this regard.

One failure of Jaynes' relevant for a quantum thread is that he did not understand the Bell theorem https://arxiv.org/abs/quant-ph/0301059 (yeah, we might have banned him on PF as a crackpot) ...
 
Last edited:
  • #159
StoneTemplePython said:
In my opinion there's a major lack of focus in your post. My comment about de Finetti had to do with the axioms used (finite vs countable additivity). The axioms selected have really nothing to do with QM interpretations.
Some different interpretations of QM use different axioms, so I don't see how this is true. And as with de Finetti's approach, these alternate axioms have had later advocates extend them or add something to them to recover the "standard" theory - such as monotone continuity for de Finetti's approach to get countable additivity, or Wallace's axioms for Everett's approach to QM.

As for the rest of your post, I don't understand what's really wrong with @A. Neumaier 's references or why discussion should be confined to them (apparently introducing any new references is "unfocused"). I'm not going to go on and on with this; it's a simple fact that there are several interpretations of probability with debate and discussion over them. The only way you seem to be getting around this is by saying anybody referenced is wrong in some way: Jaynes is "worrisome", Popper is "just a philosopher", @A. Neumaier 's references are just "general audience write-ups". There simply is disagreement over the interpretation of probability theory; I don't really see why you'd debate this.

You'll even see it in textbooks with Feller criticizing Bayesians and Jaynes then criticizing Feller in turn.

Also I really don't get why referencing Jaynes is "worrisome", he's polemical and there are many topics not covered in his book and gaps in what his treatment can cover, as well as his errors in relation to Bell's theorem as @atyy said (it's probable he didn't understand Bell's work). However it's a well regarded text, so I don't see the problem with simply referencing him.

StoneTemplePython said:
Bayesians are in general fine with Kolmogorov formulation of probability
I never said they weren't.

I actually don't understand what your point of contention is.
The way I see it:
  1. Is there debate in the interpretation of probability theory? Yes.
  2. Do some of these different interpretive approaches go via different axioms? Sometimes yes.
  3. Nevertheless is there a commonly agreed axiomatic basis? Yes, Kolmogorov's (most of whose axioms in some of the other approaches become theorems).
  4. Is such debate mostly confined to a smaller, more philosophical community, at times actual philosophers, and much rarer in on-the-ground practice? Yes.
This seems very like the situation in QM to me, which is what I was saying to @bhobba.
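For concreteness, the commonly agreed axiomatic basis in point 3 consists of Kolmogorov's axioms: a probability space ##(\Omega, \mathcal{F}, P)## satisfying

$$P(A) \ge 0 \;\;\text{for all } A \in \mathcal{F}, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)$$

for pairwise disjoint ##A_i \in \mathcal{F}##. Finite-additivity approaches such as de Finetti's keep only the finite version of the last axiom and recover the countable case from extra assumptions like monotone continuity.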
 
Last edited:
  • Like
Likes dextercioby
  • #160
Hmmm, Renner is one of the people who did a quantum version of the de Finetti theorem. Maybe that is enough to forgive him the Frauchiger and Renner papers :P

https://arxiv.org/abs/quant-ph/0512258
 
  • Like
Likes Auto-Didact, DarMM and Demystifier
  • #161
atyy said:
Hmmm, Renner is one of the people who did a quantum version of the de Finetti theorem. Maybe that is enough to forgive him the Frauchiger and Renner papers :P

https://arxiv.org/abs/quant-ph/0512258
If you can forgive Renner, maybe you could also forgive Ballentine for his misunderstanding of collapse, decoherence and the quantum Zeno? :biggrin:
 
  • Like
Likes Auto-Didact and bhobba
  • #162
Demystifier said:
If you can forgive Renner, maybe you could also forgive Ballentine for his misunderstanding of collapse, decoherence and the quantum Zeno? :biggrin:

I guess I forgive Renner more easily because I didn't spend much time on Frauchiger and Renner (I thought it was like perpetual motion machines), and you and DarMM sorted it out for me. OTOH I wasted so much time with Ballentine because he was rated so highly on this forum. And what good thing did Ballentine do equivalent to Renner's quantum de Finetti contribution?

BTW, I haven't forgiven Renner yet :biggrin:
 
  • Like
Likes bhobba, Demystifier and DarMM
  • #163
atyy said:
And what good thing did Ballentine do equivalent to Renner's quantum de Finetti contribution?
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.
 
  • Like
Likes bhobba
  • #164
Demystifier said:
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.

I guess we differ on whether they are trivially wrong or non-trivially wrong. To me it seems that both Ballentine and Frauchiger and Renner are interested in the wrong problems in quantum foundations, and never properly address the measurement problem (the only problem of real worth in quantum foundations).

http://schroedingersrat.blogspot.com/2013/11/do-not-work-in-quantum-foundations.html

Incidentally, the papers by Renner mentioned by Schroedinger's rat did address things closer to the measurement problem.
 
  • Like
Likes Auto-Didact and Demystifier
  • #165
Demystifier said:
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.

I guess the Frauchiger and Renner paper is more non-trivially wrong from the Bohmian point of view (from Copenhagen their setup just seems wrong). So perhaps that's another point in favour of forgiving them - they are unconscious Bohmians :)
 
  • Like
Likes Demystifier
  • #166
  • Like
Likes bhobba, atyy and DrChinese
  • #167
Dale said:
While that is true, within the mathematical framework itself the names are merely arbitrary symbols. This is why a, b, and c are perfectly valid elements of the mathematical framework of a scientific theory.
No. There is a huge difference between a formula (which is meaningless outside of a mathematical framework) and a mathematical framework itself, which is a logical system giving a complete set of definitions and axioms within which formulas become meaningful. While the names of concepts are in principle arbitrary, once chosen, they mean the same thing throughout (unlike variables) - to the extent that one can understand math texts written in a different language by restoring the familiar wording, without knowing the language itself.

The axioms and definitions carry the complete intrinsic meaning. With Peano's system of axioms you recover, everywhere in the universe and no matter which language is used, the same concept of counting; this is enough to reconstruct the meaning, and then apply it to reality by devising experiments to check its usefulness.

Dale said:
The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive.
This becomes particularly important when different theories use the same name for different concepts. The mapping to experiment is different because the names are merely arbitrary symbols, and the same name does not force the same mapping for different theories.
Most experiments are never mapped to theory by the content of a book on theoretical physics, but they are used to test these theories. This is possible precisely because the theory cannot be mapped arbitrarily to experimental physics without becoming obviously wrong. In a mature theory there is only one way to do the mapping, given the mathematical framework (with axioms, definitions, and results) - even when the names of the concepts are unfamiliar. Unlike in your caricature of a mathematical framework, which means nothing at all without context.

Dale said:
Again, I understood from our previous discussion that this mapping from the mathematical symbols to experimental quantities is what we were calling the objective interpretation.
An arbitrary mapping from the mathematical framework to experimental quantities is a valid interpretation iff it satisfies Callen's criterion. In a sufficiently mature theory (such as projective geometry) there is only one such mapping (apart from universal symmetries in the mathematical framework). Thus the mathematical framework alone determines the objective interpretation in this sense, the meaning of everything, and the falsifiability of the theory. Precisely this is the reason why there are no discussions about interpretation in most good theories.

But the current theory of quantum mechanics is underspecified since it uses the undefined notions of measurement and probability in its axioms and hence leaves plenty of room for interpretation.
 
Last edited:
  • Like
Likes dextercioby and Auto-Didact
  • #168
A. Neumaier said:
No. There is a huge difference between a formula (which is meaningless outside of a mathematical framework) and a mathematical framework itself, which is a logical system giving a complete set of definitions and axioms within which formulas become meaningful.
Yes, that is a good point. I concede this. So for the previous example the framework would have to include the standard axioms of algebra and arithmetic with real numbers.

A. Neumaier said:
While the names of concepts are in principle arbitrary, once chosen, they mean the same thing throughout (unlike variables) - to the extent that one can understand math texts written in a different language by restoring the familiar wording, without knowing the language itself.
Fair enough. So a is the Daleage, b is the Neumaierian, and c is the Demystifier number. Now we have a fully specified mathematical framework complete with names of concepts, axioms, and formulas. And yet, it is impossible from this alone to determine if an experiment validates or falsifies the theory. This is therefore a counter-example to your claim.

A. Neumaier said:
In a mature theory there is only one way to do the mapping, given the mathematical framework (with axioms, definitions, and results). Unlike in your caricature of a mathematical framework, which means nothing at all without context.
The context is not part of the framework. That should be obvious. The very meaning of "context" implies looking outside of something to see how it fits into a broader realm beyond itself. The whole purpose of the caricature was to remove the context and look only at the mathematical framework itself. From that "toy" exercise it is clear that the framework is insufficient for experimental testing.

If context is required for the mapping then the framework is insufficient by definition of “context”.

A. Neumaier said:
Thus the mathematical framework alone determines the objective interpretation in this sense
I completely reject this assertion. Certainly, the common usage of the term "theory" states that something in addition to the mathematical framework is required to make the mapping to experiment.
 
Last edited:
  • #169
atyy said:
I wasn't thinking of finite additivity at all. I think modern Bayesians use Kolmogorov eg. http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf is written by a Bayesian and a frequentist (maybe I'm oversimplifying), and both accept Kolmogorov. Just that in general, Bayesian thinking is valued for its intellectual framework of coherence eg. http://mlg.eng.cam.ac.uk/mlss09/mlss_slides/Jordan_1.pdf. Also, the concept of exchangeability and the representation theorem are generally taught nowadays, at least in statistics/machine learning: https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf

This kind of is the problem... lots of ambiguous language. I inferred finite additivity many posts ago... but this wasn't the right inference at all, it seems.

Exchangeability is a big umbrella but really is just specializing symmetric function theory to probability. Off the top of my head, I would have said typical use cases are really martingale theory (e.g. Doob backward martingale). But yes, graphical models and a whole host of other things can harness this. We're getting in the weeds here... a lot of big names have worked on exchangeability.

I have again lost the scent of how this is somehow related to a different kind of probability advocated by de Finetti. I have unfortunately remembered why I dislike philosophy these days.
- - - -
re: Bayes stuff... it is in some ways my preferred way of thinking about things. But people try to make it into a cult, which is unfortunate. As you've stated correctly, frequentists and Bayesians are still using the same probability theory - they just meditate on it rather differently.
 
  • #170
Demystifier said:
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists.

So do I, and many here do also. But it can polarize - a number of people here are quite critical of it. I guess it's how you react to the wrong bits - they are there for sure, but for some reason they do not worry me too much - probably because there are not too many and they are easy to spot and ignore. Of greater concern to me personally is Ballentine's dismissal of decoherence as important in interpretations - he thinks decoherence is an important phenomenon, just of no value as far as interpretations go:
https://core.ac.uk/download/pdf/81824935.pdf
'Decoherence theory is of no help at all in resolving Schrödinger’s cat paradox or the problem of measurement. Its role in establishing the classicality of macroscopic systems is much more limited than is often claimed.'

That however would be a thread all by itself :rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes:

Thanks
Bill
 
  • Like
Likes Auto-Didact and Demystifier
  • #171
Demystifier said:
Now suppose that someone else develops another theory T2 that makes the same measurable predictions as T1. So if T1 was a legitimate theory, then, by the same criteria, T2 is also a legitimate theory. Yet, for some reason, physicists like to say that T2 is not a theory, but only an interpretation. But how can it be that T1 is a theory and T2 is only an interpretation? It simply doesn’t make sense.
The scientific approach requires that a prediction is made before the experiment. So I can see a way in which T1 is considered as something more than T2. Say T1 is verified by experiment (T1 made its predictions before the experiment), but T2 is developed later, knowing the experimental results with which it has to agree, and it does not produce any new predictions. Then T1 is verified but T2 is not, even though they give exactly the same predictions.
And there is a good reason for the rule that predictions have to be produced before the experiment - people are very good at cheating themselves.

There is another thing I would like to add concerning the discussion of this topic. A theory has to include the things needed for it to produce testable predictions. But QM, as a statistical theory, makes this task difficult and ambiguous. There is a lot of event-based reasoning on the experimental side before we get statistics (consider coincidence counters, for example). On one hand, QM as a statistical theory cannot replace that event-based classical reasoning; on the other hand, it overlaps with classical theories and is more correct, so it sort of should replace it.
So to me it seems that without something we usually call "interpretation", QM's connection to experiments remains somewhat murky.
 
  • #172
I can't help wondering whether, however interesting this thread is, it is metaphysics rather than physics.

If I am mistaken could you explain why.

Regards Andrew
 
  • #173
Dale said:
Fair enough. So a is the Daleage, b is the Neumaierian, and c is the Demystifier number. Now we have a fully specified mathematical framework complete with names of concepts, axioms, and formulas. And yet, it is impossible from this alone to determine if an experiment validates or falsifies the theory. This is therefore a counter-example to your claim.
No. Each time I measure two numbers a and b I can apply your theory and say, ''Ah, if I interpret a as the Daleage and b as the Neumaierian then their product is the Demystifier number. Interesting'' (or boring).

This is the same as what happens when applying quantum mechanics to experiment. We deduce information about the wave function (a purely theoretical concept) by interpreting certain experimental activities as instances of the theory.

Dale said:
The context is not part of the framework.
I agree. It is also not part of the theory. Thus your example is ridiculous.

The example of projective planes shows that the framework itself, if it is good enough, contains everything needed to apply it in a context appropriate for the theory. This holds even when the naming is different. The context has its structure and the theory has its structure, and anyone used to recognizing structure will recognize the unique way to match them such that the theory applies successfully.
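As a mechanical illustration (my own minimal sketch, not from any of the earlier posts; the labelling of points 0..6 is of course arbitrary), the two axioms quoted in #151 - any two lines intersect in a unique point, and there is a unique line through any two points - can be checked exhaustively on their smallest model, the Fano plane, with seven points and seven lines:

```python
from itertools import combinations

# The Fano plane: the smallest model of the projective-plane axioms
# quoted in #151 (points 0..6, each line a set of three points).
points = set(range(7))
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

# Axiom: any two lines intersect in a unique point.
assert all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))

# Axiom: there is a unique line through any two points.
assert all(sum(1 for l in lines if {p, q} <= l) == 1
           for p, q in combinations(points, 2))

print("Fano plane satisfies both axioms")
```

Whatever the seven points "really are" (pencil dots, great circles, field elements of GF(2)^3), the structure alone fixes how any candidate interpretation must match up with the axioms.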

A. Neumaier said:
An arbitrary mapping from the mathematical framework to experimental quantities is a valid interpretation iff it satisfies Callen's criterion. In a sufficiently mature theory (such as projective geometry) there is only one such mapping (apart from universal symmetries in the mathematical framework). Thus the mathematical framework alone determines the objective interpretation in this sense, the meaning of everything, and the falsifiability of the theory.
Dale said:
I completely reject this assertion. Certainly, the common usage of the term "theory" states that something in addition to the mathematical framework is required to make the mapping to experiment.
Yes, namely the experience of the experimenter. The relation between theory and experiment is far more complex than the few hints given in a book on theoretical physics. It is not the subject of such books but of books on experimental physics!
Dale said:
I also found a paper entitled "What is a scientific theory?" by Patrick Suppes from 1967 (Philosophy of Science Today) who says "The standard sketch of scientific theories-and I emphasize the word 'sketch'-runs something like the following. A scientific theory consists of two parts. One part is an abstract logical calculus ... The second part of the theory is a set of rules that assign an empirical content to the logical calculus. It is always emphasized that the first part alone is not sufficient to define a scientific theory".
As he describes this as the "standard sketch" and as this also agrees with the Wikipedia reference and my previous understanding, then I take it that your definition of theory is not that which is commonly used.
But Suppes says there:
Suppes said:
scientific theories cannot be defined in any simple or direct way in terms of other non-physical, abstract objects. [...] To none of these questions do we expect a simple and precise answer. [...] This is also true of scientific theories.
He calls your view the ''standard sketch'', meaning that it is (i) the usually heard (uninformed) opinion and (ii) a vast simplification. Then he gives his (better informed) critique of the standard sketch, which he disqualifies as ''highly schematic'' and ''relatively vague'', and refers to ''different empirical interpretations''. Thus he says that the same theory has different empirical interpretations, which therefore cannot be part of the theory!
Suppes said:
It is difficult to impose a definite pattern on the rules of empirical interpretation.
Then he talks about ''models of the theory [...] highly abstract'', which makes sense only if his view of theory is just the mathematical framework which is the meaning he then uses throughout. On p.62, he talks about ''the necessity of providing empirical interpretation of a theory''. This formulation makes sense only if one identifies ''theory = the formal part'' and treats the interpretation as separate! Then he goes on saying that the formulations in the standard sketch
Suppes said:
have their place in popular philosophical expositions of theories, but in the actual practice of testing scientific theories a more elaborate and more sophisticated formal machinery for relating a theory to data is required. [...] There is no simple procedure for giving co-ordinating definitions for a theory. It is even a bowdlerization of the facts to say that co-ordinating definitions are given to establish the proper connections between models of the theory and models of the experiment.
and then he discusses (starting p.63 bottom) the morass one enters if one wants to take your definition seriously!

So the only clean and philosophically justified conceptual division is to have
  • theory = mathematical framework (which includes suggestive names attached to the concepts)
  • interpretation = explaining the suggestive names on the basis of informal key examples from experiment.
 
Last edited:
  • #174
andrew s 1905 said:
I can't help wondering whether, however interesting this thread is, it is metaphysics rather than physics.

If I am mistaken could you explain why.
It's about semantics, with bits of philosophy of science. As to why - metaphysics ''examines the fundamental nature of reality'' [Wikipedia], but this discussion is about the borders between mathematics, theory, and interpretation (philosophy of science) and the common meaning of the last two terms (semantics).
 
  • #175
A. Neumaier said:
Each time I measure two numbers a and b I can apply your theory and say, ''Ah, if I interpret a as the Daleage and b as the Neumaierian then their product is the Demystifier number. Interesting'' (or boring).
Exactly. You have to go beyond the mathematical framework and map the experimental results to the labels.

A. Neumaier said:
I agree. It is also not part of the theory. Thus your example is ridiculous.
The part of the context that gives the mapping from the framework to experiment is part of the theory. This is precisely the point where your misuse of the term “theory” is causing problems.

A. Neumaier said:
He calls your view the ''standard sketch'' meaning that this is (i) the (uninformed) usually heard opinion and (ii) a vast simplification. Then he gives his (better informed) critique of the standard sketch,
Yes. But as far as I am aware the standard sketch remains the standard meaning of the terms and the scientific community has not adopted his “better informed” opinion. I.e. even someone who is openly antagonistic to the standard definition admits that it is in fact the standard definition.

At this point I think that further discussion is pointless (as is always the case in interpretations discussions here). You are clearly going to continue to use the non-standard terminology, and so you are going to continue to have multi-page semantic arguments due to the fact that you are using non-standard terminology. I also think that your supportive references don't actually support your position. In particular, the Wikipedia reference on interpretation uses the term "reality" instead of your term "observation/reality", and also your reference to Callen doesn't support your point since he is talking about the "theory" (mathematical framework + mapping to experiment) and not just the "mathematical framework" as you claim.

You are free to get the last word in, but I am disengaging at this point. Your whole approach uses a non-standard meaning for the word "theory" and I will not adopt your meaning and I suspect you will not adopt the standard meaning. It is the standard meaning, despite your distaste for it.
 
  • #176
Dale said:
Exactly. You have to go beyond the mathematical framework and map the experimental results to the labels.
But this mapping is not part of the theory; it is done by the experimenter who wants to use the theory. Each time someone finds a new way of testing the theory, the mapping changes! In your case, there are many possibilities to do the mapping, hence a multitude of interpretations.

Dale said:
The part of the context that gives the mapping from the framework to experiment is part of the theory. This is precisely the point where your misuse of the term “theory” is causing problems.
Then please expand your theory fragment (or another one of your choice) to a complete theory that we can discuss.

Dale said:
even someone who is openly antagonistic to the standard definition admits that it is in fact the standard definition.
No, he does not even give it the status of a definition - he admits only that it is the standard sketch, and emphasizes this weak status!

Dale said:
But as far as I am aware the standard sketch remains the standard meaning of the terms and the scientific community has not adopted his “better informed” opinion.
You are indeed not aware of the state of the art! Not only Suppes, your only witness among the philosophers of science, but also Wikipedia, your only other authoritative source, testify against you:

Wikipedia said:
A scientific theory is an explanation of an aspect of the natural world that can be repeatedly tested and verified in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. [...] theory [...] describes an explanation that has been tested and widely accepted as valid.
It explicitly separates the scientific theory (''an explanation of an aspect of the natural world'') and the relation to experiment (''the scientific method'').

Wikipedia cites other authorities to support its definition; none of them requires a map between theory and experiment as part of the theory:
Stephen Jay Gould said:
Theories are structures of ideas that explain and interpret facts
The United States National Academy of Sciences said:
The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence.
The American Association for the Advancement of Science said:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment.
Wikipedia said:
The logical positivists thought of scientific theories as statements in a formal language.
Wikipedia said:
The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework [...] One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses.
This is exactly my view, except that they have ''logical framework'' where you had suggested the term ''mathematical framework''!
Wikipedia said:
Engineering practice makes a distinction between "mathematical models" and "physical models"
Wikipedia said:
In physics, the term theory is generally used for a mathematical framework
Maybe our dispute comes from the fact that you are an engineer and I am a mathematician and physicist!
But note that this discussion is in a physics forum, not an engineering forum.
 
  • #177
A. Neumaier said:
Each time someone finds a new way of testing the theory, the mapping changes!

It might be helpful if you would give a concrete example. Perhaps this would qualify as one: the "theory" (for your meaning of that term) is the standard quantum theory of a qubit. Two different possible "mappings" are: interpreting the qubit theory as describing the spin of an electron in a Stern-Gerlach experiment; interpreting the qubit theory as describing the polarization of a photon passing through a beam splitter. Is that the sort of thing you have in mind?

If it is, then I think you are using the term "interpretation" differently from the way it is used when discussing the foundations of QM (which is the meaning of "interpretation" that this thread is supposed to be discussing). QM "interpretation" has nothing to do with which particular experiment you are analyzing. It has to do with what kind of story you tell about what is happening in the experiment. In the above example, say we pick the first "interpretation" in your sense: we interpret the quantum theory of the qubit as describing the spin of an electron. Then we still have different possible QM interpretations: a collapse interpretation says the spin of the electron collapses into an eigenstate when it passes through the Stern-Gerlach device; the many worlds interpretation says the electron's spin gets entangled with its momentum so it ends up in a superposition of two states, one with "up" spin coming out of the device in one direction, the other with "down" spin coming out of the device in a different direction. But there is no way to tell experimentally which of these "interpretations" is right, and these "interpretations" have nothing to do with how you match up the math of the standard quantum theory of a qubit with experimental observations.
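That the different QM interpretations share one and the same mapping to predicted statistics can be made concrete. The following minimal sketch (my own toy calculation, assuming a qubit prepared in an equal superposition and measured in the z basis) computes the outcome probabilities twice: once by applying the Born rule directly, as in the collapse story, and once by unitarily entangling the qubit with a two-state pointer and reading the diagonal of the pointer's reduced density matrix, as in the no-collapse story:

```python
import numpy as np

# Qubit state: spin-x "up", to be measured in the z basis.
psi = np.array([1, 1]) / np.sqrt(2)

# (a) Collapse story: Born rule applied directly to the qubit.
p_collapse = np.abs(psi) ** 2

# (b) No-collapse story: entangle the qubit unitarily with a
# two-state "pointer", then read the outcome weights from the
# diagonal of the pointer's reduced density matrix.
pointer0 = np.array([1, 0])
pointer1 = np.array([0, 1])
joint = (psi[0] * np.kron([1, 0], pointer0)
         + psi[1] * np.kron([0, 1], pointer1))
rho = np.outer(joint, joint.conj())
# Partial trace over the qubit leaves the pointer's density matrix.
rho_pointer = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
p_unitary = np.real(np.diag(rho_pointer))

assert np.allclose(p_collapse, p_unitary)  # identical predictions
```

Both routes give probabilities (1/2, 1/2); the "stories" differ, the numbers compared with experiment do not.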
 
  • #178
PeterDonis said:
It might be helpful if you would give a concrete example.
I already gave in #153 and #167 the example of the axioms for natural numbers (plus a little finite set theory for the Cartesian product), and in #151 and #173 that of projective planes. What they demonstrate is the key to a correct understanding. (Though it doesn't quite answer your query - please be patient!)

Let me give a complete example inspired by Onaep, a little-known Russian contemporary of François Viète (who invented the notion of variables).

Onaep said:
I is a rebmun. If Z is a rebmun then ZI is a rebmun. If ZI=YI then Z=Y. Never ZI=I. Every rebmun is generated in this way.

In 1600, Dnikeded, an ambitious student of math, sits in Prof. Onaep's class, having been told that Onaep is an authority in the field of applied algebra. He is reading the above (as part of one of Onaep's exercises) for the first time and has not the slightest idea what it is about. He has never heard of anything called a rebmun. Determined to figure out the meaning, and being familiar with Viète's concept of variables, he plays with the statements given. Well, at least he knows that I is a rebmun. Setting Z=I he discovers that II is a rebmun. Setting Z=II he discovers that III is a rebmun. Setting Z=III he discovers that IIII is a rebmun. Setting Z=IIII he discovers that IIIII is a rebmun. This reminds him of counting: each new rebmun is obtained by appending an I to the previous rebmun, and the process goes on forever...

Remembering what he had already learned about algebra, Dnikeded noticed that the rebmuns could be interpreted in terms of stuff he was familiar with - numbers. If he identified I with 1 then he could equate II with 2, III with 3, IIII with 4, IIIII with 5, etc. ''Ah, this is a variant of the way we count the number of beers in the pub,'' he thought, ''except that each 5th bar would be drawn vertically, a minor issue that doesn't really change things.''

But there were more properties: if ZI=YI then Z=Y. ''True - if my friend and I both order a beer and then have the same number of beers, we must have had the same number of beers before. Thus Onaep's theory is predictive, and things come out correctly. Let me try the next item, never ZI=I; can I falsify my interpretation?'' He tries and finds no problem with it - I is too short to be of the form ZI.

Dnikeded is left with the final statement to be figured out. He thinks about what he can generate so far: 1,2,3,4,5,6,7,8,9,10,11,... a never ending list of numbers - but neither 0 nor fractions like 2/3, and no negative numbers either. Suddenly everything makes sense. ''Ah, I finally understand: rebmuns are nothing else than the numbers I have been familiar with since childhood, before I got interested in more advanced number theory!''


The mapping from theory to reality/experiment that Dale was conjuring up as belonging to the theory appeared out of nowhere!

The map is not provided by the theory, but it exists for any theory that deserves to be called scientific. The map is created/discovered (and has to be recreated/rediscovered by each individual) in the process of understanding the meaning of a scientific theory! Initially the theory is just a formal system, but once we understand it, it is related to our own experience. When we see that it matches experience and satisfies some empirical tests, we know that we have really understood! As all students of physics know, this may be quite some time after we heard the details of the theory and checked the logical consistency of the formal stuff we are trying to understand. We can solve the exercises long before we have a good feeling for the theory, i.e., a good map between theory and experience.

This is the generic situation of a theory without an interpretation problem - the interpretation is essentially forced upon us by the structure of the theory, no matter how things are named. The naming only simplifies the process of understanding.

But wait...

When Dnikeded compared his solution of the exercise with that of his friend Rotnac, he noticed that the latter had another way of interpreting Onaep. He had also played with the statements in Onaep's riddle and associated them with marbles in his pocket. He linked changing Z to ZI to putting a new marble into the pocket. Starting with the empty pocket that contained no marble, he got the correspondence I=0, II=1, III=2, IIII=3, etc. Both Dnikeded and Rotnac tried to figure out who had made an error and whose interpretation was defective, but they couldn't find one. So they went to Onaep, asking for his judgment. Onaep declared both interpretations to be valid.

Indeed, the modern concept of natural numbers (based on the Peano axioms) exists in two forms, and the two different traditions have two different interpretations, depending on whether they call 0 a natural number (e.g., friends of C++ and set theorists) or whether they don't (e.g., friends of Matlab and all before Cantor). I belong to the second category and believe that 0 is an unnatural number, since I have never seen someone count 0,1,2,3..., and it took ages to discover 0 - and many more centuries to declare it natural.

The two interpretations are related by the fact that ##x\to x+1## is an isomorphism between the two. (As Peter Donis mentioned in his reply #179, one could similarly start with any number ##a## and count from there, corresponding to an isomorphism ##x\to x+a##.) This is analogous to the interpretation of classical mechanics, which is unique only up to the choice of an orthonormal coordinate system. In the latter case, a rigid motion provides the necessary isomorphism.
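Onaep's axioms and the two competing readings can even be spelled out mechanically. Here is a minimal sketch (my own illustration; the names `dnikeded` and `rotnac` are just labels for the two readings) showing that both interpretations respect the successor structure and are related by the isomorphism ##x\to x+1##:

```python
# Two interpretations of Onaep's "rebmun" axioms.  Dnikeded reads
# the rebmun "I"*n as the number n; Rotnac reads it as n - 1
# (marbles added to an initially empty pocket).
def successor(rebmun):          # Z -> ZI
    return rebmun + "I"

rebmuns = ["I"]
for _ in range(4):              # generate I, II, III, IIII, IIIII
    rebmuns.append(successor(rebmuns[-1]))

dnikeded = {r: len(r) for r in rebmuns}       # I=1, II=2, ...
rotnac   = {r: len(r) - 1 for r in rebmuns}   # I=0, II=1, ...

# The two interpretations are related by the isomorphism x -> x+1,
# which carries Rotnac's reading of every rebmun to Dnikeded's ...
assert all(rotnac[r] + 1 == dnikeded[r] for r in rebmuns)
# ... and it commutes with the successor: ZI reads as "+1" either way.
assert all(dnikeded[successor(r)] == dnikeded[r] + 1 for r in rebmuns[:-1])
assert all(rotnac[successor(r)] == rotnac[r] + 1 for r in rebmuns[:-1])
```

Neither reading can be falsified against the axioms; only the formal structure, not the theory, fixes the mapping up to this isomorphism.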
 
  • #179
A. Neumaier said:
I already gave in #153 and #167 the example of the axioms for natural numbers (plus a little finite set theory for the Cartesian product), and in #151 and #173 that of projective planes.

I was talking about an example using QM, since interpretations of QM is the topic of this thread.

A. Neumaier said:
the interpretation is essentially forced upon us by the structure of the theory

Except that, as you say, there are two interpretations that are consistent with the theory (the one starting with 0, and the one starting with 1).

And even that doesn't exhaust the possibilities: you can pick any integer (positive, negative, or zero) you like, and adopt it as the starting "natural number", and you will satisfy all of the axioms. In other words, there are an infinite number of possible isomorphisms to some "canonical" set of natural numbers (say the ones starting with 1, since those are the ones you say you prefer), each of the form ##x \rightarrow x + a##, with ##a## being any integer.

In other words, while it is certainly possible to discover a mapping between a mathematical model and experience in the process of understanding the mathematical model, there is no guarantee that the mapping you discover is the only mapping. So two people using the same mathematical model can still end up making different predictions if they have not taken steps to ensure that they are both using the same mapping as well. For example, if you use 1-based counting and I use 0-based counting, we're likely to get confused trying to match up our counts if we don't realize the difference and make appropriate adjustments.

With all that said, I still come back to what I said in my previous post: none of this has anything to do with interpretations of QM, because different interpretations of QM all agree about which mapping between the mathematical model and experiment to use. (This mapping still depends on the specific experiment: in my previous post I gave the example of the same qubit mathematical model applying to both electron spins and photon polarizations.) The different QM interpretations only disagree about what story to tell about "what is going on behind the scenes", so to speak; but those stories have nothing to do with matching up the mathematical model to experiment. So I don't see how any of what's been said about matching up the mathematical model to experiment/experience has anything to do with QM interpretations, which is the topic of this thread.
 
  • #180
PeterDonis said:
And even that doesn't exhaust the possibilities: you can pick any integer (positive, negative, or zero) you like, and adopt it as the starting "natural number", and you will satisfy all of the axioms. In other words, there are an infinite number of possible isomorphisms to some "canonical" set of natural numbers (say the ones starting with 1, since those are the ones you say you prefer), each of the form ##x \rightarrow x + a##, with ##a## being any integer.

In other words, while it is certainly possible to discover a mapping between a mathematical model and experience in the process of understanding the mathematical model, there is no guarantee that the mapping you discover is the only mapping. So people both using the same mathematical model can still end up making different predictions, if they have not taken steps to ensure that they're both using the same mapping as well. For example, if you use 1-based counting and I use 0-based counting, we're likely to get confused trying to match up our counts if we don't realize the difference and make appropriate adjustments.
Yes, and as I had said in my last post, exactly the same happens in classical mechanics, which can be mapped only up to a rigid motion.
PeterDonis said:
With all that said, I still come back to what I said in my previous post: none of this has anything to do with interpretations of QM, because different interpretations of QM all agree about which mapping between the mathematical model and experiment to use.
No, they don't. Please give me a reference to an online article or well-known textbook that gives this unique ''mapping between the mathematical model and experiment'', and I'll show (like Suppes did) that it says almost nothing about real experiments. Theoretical sources only say something about relations to abstract buzzwords like ''observable'' and ''measure'', about whose precise meaning the interpretations (and experimental practice) widely differ. There are many thousands of experiments to be covered; the term ''experiment'' in your comment is something very theoretical...

PeterDonis said:
It might be helpful if you would give a concrete example. Perhaps this would qualify as one: the "theory" (for your meaning of that term) is the standard quantum theory of a qubit. Two different possible "mappings" are: interpreting the qubit theory as describing the spin of an electron in a Stern-Gerlach experiment; interpreting the qubit theory as describing the polarization of a photon passing through a beam splitter. Is that the sort of thing you have in mind?
Well, this is a meta setting in which the real world is replaced by a theoretical world, in which the qubit has two different interpretations. No, this was not what I meant.

In your context, consider the qubit first discussed by Weyl in 1927 in the context of a Stern-Gerlach-like experiment. [H. Weyl, Quantenmechanik und Gruppentheorie, Z. Phys. 46 (1927), 1-46.] The title of the first part is ''The meaning of the representation of physical quantities through Hermitian forms'' (''Bedeutung der Repräsentation von physikalischen Größen durch Hermitesche Formen''). It discusses, among other things, the paradox that the angular momentum along each of the three coordinate axes can take only the values ##\pm 1## (in units of ##\hbar/2##), and yet so can the angular momentum along any other direction - which is inconsistent with the algebra. This shows the need for proper interpretation. He then introduces the ensemble interpretation (ensemble = ''Schwarm'') in pure and mixed states, and resolves the paradox in the well-known statistical way. (Thus - @bhobba, @atyy - the ensemble interpretation starts at least with Weyl 1927, and not only with Ballentine 1970!)
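For readers who want the paradox in symbols, here is a minimal numerical sketch (my own, using the standard Pauli matrices, not taken from Weyl's paper): the spin component along any unit direction ##n## is a real linear combination of the three coordinate components, yet its eigenvalues are still exactly ##\pm 1## - only the statistical expectation value interpolates:

```python
import numpy as np

# Pauli matrices: spin components along x, y, z (units of hbar/2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)                      # random unit direction

# The paradox: n.sigma is a weighted sum of sx, sy, sz, yet its
# eigenvalues are still exactly +1 and -1, never intermediate values.
spin_n = n[0] * sx + n[1] * sy + n[2] * sz
assert np.allclose(sorted(np.linalg.eigvalsh(spin_n)), [-1, 1])

# The statistical resolution: only the *expectation* interpolates.
up_z = np.array([1, 0], dtype=complex)      # eigenstate of sz
mean = np.real(up_z.conj() @ spin_n @ up_z)
assert np.isclose(mean, n[2])               # <n.sigma> = n_z here
```

Reading the individual ##\pm 1## outcomes as pre-existing component values is what generates the contradiction; reading them as statistics of an ensemble, as Weyl does, dissolves it.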

The map from theory to experiment is stated not as part of the theory but as ''the assumption by Goudsmit and Uhlenbeck, which has proven itself well'' [Goudsmit, S. and Uhlenbeck, G.E., Die Kopplungsmöglichkeiten der Quantenvektoren im Atom. Z. Physik A 35 (1926), 618-625.] - but for electrons, and it was formulated in purely spectroscopic terms. Weyl applies it to a Stern-Gerlach-like experiment (with electrons in place of the original silver atoms).

Why was he allowed to do that? One map from theory to experiment was given through spectroscopy, another map was given through Stern-Gerlach for silver atoms. From these, Weyl created by analogy (not by theory) a third map for electrons in the Stern-Gerlach-like experiment. Thus the map changed.

Moreover, there are many more experiments related to angular momentum, and no quantum theory book I know points out how these are connected to the theory.

Nobel prizes are awarded for new ways of devising useful measurements at unprecedented accuracy, but nobody has ever suggested that each time the theory needs to be amended by mapping its mathematics to these new experimental possibilities. This mapping is described instead in papers published in experimental physics journals!

This is very typical. A theory book gives informally (not as part of the theory, since different expositions of the same theory use different examples) some key experiments in a very simplified description and relates these in an exemplary manner to theory, in order to create suggestive relations between theory and experiment. These are of the same nature as the (according to @Dale ''highly suggestive'') hints to reality given in a purely mathematical theory to make it intelligible. And they have precisely the same limitations that Dale pointed out:
Dale said:
The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive.
By the same token, the mapping to experiment is separate from the physical theory itself. No theory book gives more than highly suggestive names and pointers to experiments. The connection to real experiments must be made by the experimenter who understands the difference between a real experiment and a symbolic toy demonstration.
 
