Jürg Fröhlich on the deeper meaning of Quantum Mechanics

  • #101
A. Neumaier said:
Nothing here is about causes. Bayes' formula just relates two different conditional probabilities.
Well, yes, it has to do with the use we make of it, because otherwise it's only syntax.

To make sense you need semantics and therefore an interpretation/model.

/Patrick
 
  • #102
microsansfil said:
Well, yes, it has to do with the use we make of it, because otherwise it's only syntax.

To make sense you need semantics and therefore an interpretation/model.
Yes, but no semantics requires that one of ##A## and ##B## is the cause of the other. They can be two arbitrary statements. Taking the relative frequency of pregnancies as ##A## and the number of storks in the area as ##B## is valid semantics.
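Here is a minimal sketch (with invented numbers, purely for illustration) of how Bayes' formula relates the two conditional probabilities as pure bookkeeping on a joint distribution, with no causal input anywhere:
```python
# Hypothetical joint distribution for two statements, with no causal claim attached:
# A = "the birth rate in a district is high", B = "many storks are counted there".
# The numbers are invented purely for illustration.
import numpy as np

# joint probability table P(A, B): rows = A in {false, true}, columns = B in {false, true}
P = np.array([[0.35, 0.15],
              [0.15, 0.35]])

P_A = P.sum(axis=1)[1]          # P(A)
P_B = P.sum(axis=0)[1]          # P(B)
P_A_given_B = P[1, 1] / P_B     # P(A|B)
P_B_given_A = P[1, 1] / P_A     # P(B|A)

# Bayes' formula: P(A|B) = P(B|A) P(A) / P(B), pure bookkeeping on the joint table
assert np.isclose(P_A_given_B, P_B_given_A * P_A / P_B)
print(P_A_given_B, P_B_given_A)  # both exceed P(A) and P(B): a correlation, no causation implied
```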
 
  • Like
Likes Auto-Didact
  • #103
https://bayes.wustl.edu/etj/articles/cmystery.pdf
The idea is that a conditional probability, depending on the context, can be used to express physical causality.

In the paper the example of BERNOULLI'S URN REVISITED (page 13) : In (18) the probability on the right expresses a physical causation, that on the left only an inference.

A conditional probability can, depending on the context, express a "physical causality" or an inference.
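A small simulation of the urn example (my own sketch, with illustrative numbers: ##N=10## balls, ##M=4## red, two draws without replacement, not the numbers from the paper) shows that the "backward" probability ##P(\mathrm{red}_1|\mathrm{red}_2)## comes out numerically equal to the "forward" one ##P(\mathrm{red}_2|\mathrm{red}_1)##, even though only the latter can be read as physical causation:
```python
# Jaynes' "Bernoulli's urn revisited": an urn with N balls, M of them red, two draws
# without replacement.  P(red_2 | red_1) can be read as physical causation (the first
# draw depletes the urn); P(red_1 | red_2) is only an inference, since the second draw
# cannot physically affect the first.  The formalism treats both identically.
import random

N, M, TRIALS = 10, 4, 200_000       # illustrative numbers, not from the paper
both = red1 = red2 = 0
for _ in range(TRIALS):
    urn = ['red'] * M + ['white'] * (N - M)
    random.shuffle(urn)
    first, second = urn[0], urn[1]  # two draws without replacement
    red1 += first == 'red'
    red2 += second == 'red'
    both += (first == 'red') and (second == 'red')

print("P(red_2 | red_1) ~", both / red1)   # ~ (M-1)/(N-1) = 1/3, the "causal" direction
print("P(red_1 | red_2) ~", both / red2)   # the same value, but purely inferential
```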

/Patrick
 
  • #104
microsansfil said:
https://bayes.wustl.edu/etj/articles/cmystery.pdf
The idea is that a conditional probability, depending on the context, can be used to express physical causality.

In the paper the example of BERNOULLI'S URN REVISITED (page 13) : In (18) the probability on the right expresses a physical causation, that on the left only an inference.

A conditional probability can, depending on the context, express a "physical causality" or an inference.
But only if you know already the causal connection. From probabilities alone one can never deduce a causal relation, only correlations.
 
  • Like
Likes Auto-Didact
  • #105
DarMM said:
Many books on the philosophy of Probability theory delve into the details, but the Wikipedia link here has the most pertinent details:
https://en.wikipedia.org/wiki/Bayesian_probability#Objective_and_subjective_Bayesian_probabilities
It's mostly about prior probabilities. Objective Bayesians build off of Cox's theorem and Subjective Bayesians off of DeFinetti's work.

The best book I think on the Objective outlook is E.T. Jaynes's "Probability Theory: The Logic of Science"

For the Subjective Bayesian outlook I like J. Kadane's "Principles of Uncertainty" or DeFinetti's "Theory of Probability: A Critical Introductory Treatment"
Surely probability theory is no more a part of the foundations of QT than the Fourier transform ?
They are both in the toolbox of many theories, including classical mechanics.
 
  • #106
Well, I guess there's a lot to find problematic about Fourier transforms for philosophers. I'd not be surprised that we could get a discussion about Fourier transformation that gets over 100 postings long.

Just to trigger a heated debate: What's better, Fourier or Laplace trafos (it's nearly as important as the war-like debates about emacs vs. vi ;-)).

SCNR.
 
  • Haha
Likes DarMM
  • #107
Mentz114 said:
Surely probability theory is no more a part of the foundations of QT than the Fourier transform ?
It has very minor effects like how exactly you think of the quantum state, or what you think is going on in quantum tomography. Not of any practical importance.

In post #80 I wasn't concerned with what one thinks of probability theory, but more that many of these issues (Wigner's friend, What is a measurement, etc) are nothing more than an issue with having probability theory in a fundamental theory.
 
  • Informative
Likes Mentz114
  • #108
vanhees71 said:
Just to trigger a heated debate: What's better, Fourier or Laplace trafos (it's nearly as important as the war-like debates about emacs vs. vi ;-)).
You're not one of those Laplacists are you? :eek:

Mentors can @vanhees71 be banned for corrupting the forum?
 
  • #109
No, don't worry, I'm usually using the Fourier transformation :biggrin:
 
  • #110
DarMM said:
The only problem is that quantum mechanics involves non-classical correlations. That is correlations outside the polytope given by assuming that your variables all belong in a single sample space. You can show (Kochen-Specker, Colbeck-Renner, etc) that theories with correlations outside of this polytope by necessity lack a dynamical account for their outcomes or correlations.
I just thought I'd put an example of the proof of this here if people enjoy it. Consider ##X## and ##Z## polarization measurements on two particles. All measurements have outcomes ##\{0,1\}##. I'll call the observers ##A## and ##B##. Imagine we find they are correlated as follows:

            ##X_A##    ##Z_A##
##X_B##     ##=##      ##=##
##Z_B##     ##=##      ##\neq##

i.e. if they both perform an ##X## measurement the results will be equal.

Now consider the chance that ##A## obtains ##0## when they measure ##X_A##:
$$p\left(0|X_A\right)$$
From no-signalling this doesn't depend on the ##B## measurement, so we'll just take it to be ##X_B##, then
$$p\left(0|X_A\right) = p\left(00|X_A X_B\right) + p\left(01|X_A X_B\right)$$
Of course the second term is zero so:
$$p\left(0|X_A\right) = p\left(00|X_A X_B\right)$$
Since this is purely based on the correlation array it doesn't matter if we include any other arbitrary collection of events ##e## that occurred prior to the measurements:
$$p\left(0|X_A , e\right) = p\left(00|X_A X_B , e\right)$$
If we then focus on the chance for an ##X_B## measurement to produce zero we get a similar result:
$$p\left(0|X_B , e\right) = p\left(00|X_A X_B , e\right)$$
And thus we have:
$$p\left(0|X_A , e\right) - p\left(0|X_B , e\right) = 0$$
Iterating through a few different combinations of measurements we get three more equations like this for other sets of outcomes, thus in total we have:
$$\begin{aligned}
p\left(0|X_A , e\right) - p\left(0|X_B , e\right) &= 0\\
p\left(0|X_B , e\right) - p\left(0|Z_A , e\right) &= 0\\
p\left(0|Z_A , e\right) - p\left(1|Z_B , e\right) &= 0\\
p\left(1|Z_B , e\right) - p\left(1|X_A , e\right) &= 0
\end{aligned}$$
(the third equation uses the ##Z_A##, ##Z_B## anticorrelation and the fourth the ##Z_B##, ##X_A## correlation).

These cancel off against each other to give us:
$$p\left(0|X_A , e\right) - p\left(1|X_A , e\right) = 0$$
Since we have ##p\left(1|X_A , e\right) = 1 - p\left(0|X_A , e\right) ## this gives us:
$$p\left(0|X_A , e\right) = \frac{1}{2}$$
So the outcome of an ##X_A## measurement cannot be deterministic. With this you can show none of the other outcomes can be deterministic either.

The correlations I used here are supra-quantum, i.e. stronger than those in quantum mechanics. Ekert and Renner proved that the same holds true in QM (https://www.nature.com/articles/nature13132?draft=journal, note they use information theoretic language so phrase it in terms of privacy).

The correlations are too strong for individual outcomes to be deterministic.

If you try the same with classical correlations, the equations come out underdetermined; the solutions then have a free parameter ##\lambda## which can be adjusted to give deterministic solutions for the correlations.
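For anyone who wants to check the arithmetic, here is a short linear-programming sketch I put together (it needs numpy and scipy; the encoding of the correlation table is mine): it shows that any no-signalling distribution reproducing the table above forces ##p\left(0|X_A\right) = \frac{1}{2}## exactly.
```python
# Linear-programming check of the argument above (a sketch, needs numpy + scipy):
# any no-signalling distribution reproducing the correlation table (equal outcomes for
# every setting pair except (Z_A, Z_B), which is anticorrelated) has p(0|X_A) = 1/2.
import numpy as np
from itertools import product
from scipy.optimize import linprog

settings = list(product('XZ', 'XZ'))          # (A's setting, B's setting)
outcomes = list(product((0, 1), (0, 1)))      # (A's outcome, B's outcome)
index = {(s, o): i for i, (s, o) in enumerate(product(settings, outcomes))}
n = len(index)                                # 16 unknowns p(ab | s_A s_B)

def row(entries):
    r = np.zeros(n)
    for key, coeff in entries:
        r[index[key]] = coeff
    return r

A_eq, b_eq = [], []

# normalisation for every setting pair
for s in settings:
    A_eq.append(row([((s, o), 1.0) for o in outcomes])); b_eq.append(1.0)

# perfect (anti)correlations from the table: forbidden outcome pairs get probability 0
for s in settings:
    anti = (s == ('Z', 'Z'))
    for o in outcomes:
        if (o[0] != o[1]) != anti:
            A_eq.append(row([((s, o), 1.0)])); b_eq.append(0.0)

# no-signalling: A's marginals independent of B's setting, and vice versa
for sA, a in product('XZ', (0, 1)):
    A_eq.append(row([(((sA, 'X'), (a, b)), 1.0) for b in (0, 1)]
                    + [(((sA, 'Z'), (a, b)), -1.0) for b in (0, 1)])); b_eq.append(0.0)
for sB, b in product('XZ', (0, 1)):
    A_eq.append(row([((('X', sB), (a, b)), 1.0) for a in (0, 1)]
                    + [((('Z', sB), (a, b)), -1.0) for a in (0, 1)])); b_eq.append(0.0)

# objective: p(0|X_A), computed with B measuring X (no-signalling makes the choice irrelevant)
c = row([((('X', 'X'), (0, b)), 1.0) for b in (0, 1)])

lowest  = linprog(c,  A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1)).fun
highest = -linprog(-c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1)).fun
print(lowest, highest)   # both 0.5: the X_A outcome cannot be deterministic
```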
 
  • Like
Likes dextercioby
  • #111
stevendaryl said:
...That means that after a measurement, the device is in a definite "pointer state". On the other hand, if you treat the measuring device (plus observer plus the environment plus whatever else is involved) as a quantum mechanical system that evolves under unitary evolution, then unless the observable being measured initially has a definite value, then after the measurement, the measuring device (plus observer, etc) will NOT be in a definite pointer state.

This is just a contradiction...
As QM is formulated and corroborated, the observer is mandatory and implicit - always. So anyone trying to solve the foundational problems of QM by removing the observer appears, to me, not to appreciate the heart of a measurement theory.

So the pointer state is in a definite state relative to the original observer (the measurement device, if you wish). The fact that it can be in a non-definite state relative to another observer is not a contradiction per se, right?

A contradiction would appear only when they "communicate" their views, and then we have a physical interaction between them. But if the two observers are generalized beyond the "classical background" that Bohr relied on, the "contradiction" may well manifest itself instead as an interaction term between the observers. This seems to me the natural resolution. So rather than getting rid of observers, I think what we need to do is deepen the abstraction of the observer to extend beyond classical ontologies.

/Fredrik
 
  • #112
A. Neumaier said:
My point is that theory is never about subjective approximations to objective matters. It is about what is objective. How humans, robots, or automatic experiments handle it is a matter of psychology, artificial intelligence, and experimental planning, respectively, not of the theory.
I think these things are a divider among researchers in this area, and it is interesting to highlight. I think your view is stringent and, if it is attainable, the most accurate one.

But I belong to those who think that absolute objectivity is an illusion. It cannot be attained; at best it is an attractor, not unlike human science itself. Therefore, using it as a hard constraint may be misguided when we are building a machinery for optimal inference, because I think that in order to see how rules are formed, you need to break them.

So by your definition I belong to the subjective probability camp, but unlike your second sentence I do not mix in human cognition. The subjectivity here does not mean in any significant sense that science is subjective human-to-human. All it means is that the best inferred physical states, encoded by some kind of state vector, depend on the physical subsystem making the inference.

But this stance toward foundational research seems to me to be in the minority and thus under-developed, because it creates a lot of extra difficulties; therefore most physicists seem not to like it. That is my impression.

The main difficulty is how to explain the de facto objectivity we all agree upon, despite minor disagreements, from a foundation that is fundamentally one of interacting subjective views. This is a serious problem, sufficient to reject the stance unless you actually see a chance to solve it.

/Fredrik
 
  • Like
Likes Auto-Didact
  • #113
vanhees71 said:
Well, I guess there's a lot to find problematic about Fourier transforms for philosophers. I'd not be surprised that we could get a discussion about Fourier transformation that gets over 100 postings long.
Why so much hatred for philosophers? What did they do to you?

Bertrand Russell said:
The value of philosophy is, in fact, to be sought largely in its very uncertainty. The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected. As soon as we begin to philosophize, on the contrary, we find, as we saw in our opening chapters, that even the most everyday things lead to problems to which only very incomplete answers can be given. Philosophy, though unable to tell us with certainty what is the true answer to the doubts which it raises, is able to suggest many possibilities which enlarge our thoughts and free them from the tyranny of custom. Thus, while diminishing our feeling of certainty as to what things are, it greatly increases our knowledge as to what they may be; it removes the somewhat arrogant dogmatism of those who have never traveled into the region of liberating doubt, and it keeps alive our sense of wonder by showing familiar things in an unfamiliar aspect.

Fourier transforms are easy to understand in the context of finite group theory: https://link.springer.com/chapter/10.1007/3-540-45878-6_8
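A tiny illustration of that point of view (my own sketch): for the cyclic group ##\mathbb{Z}_N## the characters are ##\chi_k(n) = e^{-2\pi i k n/N}##, the group-theoretic Fourier transform is just the ordinary DFT, and convolution on the group becomes pointwise multiplication of the transforms:
```python
# Fourier transform on the finite cyclic group Z_N: expand a function in the characters
# chi_k(n) = exp(-2*pi*i*k*n/N).  For Z_N this is just the ordinary DFT (numpy's
# convention), and convolution on the group becomes pointwise multiplication.
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.normal(size=N)                                  # a function on Z_8
g = rng.normal(size=N)                                  # another one

n = np.arange(N)
characters = np.exp(-2j * np.pi * np.outer(n, n) / N)   # chi_k(n): row k, column n
f_hat = characters @ f                                  # group-theoretic Fourier transform
assert np.allclose(f_hat, np.fft.fft(f))                # identical to the usual DFT

# convolution theorem on the group
conv = np.array([sum(f[m] * g[(k - m) % N] for m in range(N)) for k in range(N)])
assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g))
print("DFT = Fourier transform on Z_N; convolution theorem checked")
```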

/ Patrick
 
  • #114
DarMM said:
The only problem is that quantum mechanics involves non-classical correlations. That is correlations outside the polytope given by assuming that your variables all belong in a single sample space. You can show (Kochen-Specker, Colbeck-Renner, etc) that theories with correlations outside of this polytope by necessity lack a dynamical account for their outcomes or correlations.
I don't consider this a problem. On the contrary, this most surprising consequence of the quantum formalism has been observed with astonishing significance and accuracy over the last decades in the wake of Bell's seminal paper. It's not a problem but a feature of QT to have predicted this phenomenon accurately!
 
  • #115
microsansfil said:
Why so much hatred for philosophers? What did they do to you?
Why hatred? I'm just doubting the usefulness of philosophy in the natural sciences, no more no less.
 
  • #116
vanhees71 said:
I don't consider this a problem. On the contrary, this most surprising consequence of the quantum formalism has been observed with astonishing significance and accuracy over the last decades in the wake of Bell's seminal paper. It's not a problem but a feature of QT to have predicted this phenomenon accurately!
That quote was from #80 where the context was it's a problem for "completions" of quantum mechanics, not for QM itself.
 
  • Like
Likes vanhees71
  • #117
My problem is to see the necessity for "completions" as long as there are no observations hinting at an incompleteness of QT. What I have trouble understanding is why some people are so obsessed with purely philosophical issues that they think QT is somehow incomplete. The only incompleteness I'm aware of is the pressing issue of the missing quantum theory of gravity (and, in view of the common geometrical interpretation of GR, probably also of spacetime).
 
  • Like
Likes DarMM
  • #118
vanhees71 said:
My problem is to see the necessity for "completions" as long as there are no observations hinting at an incompleteness of QT. What I have trouble understanding is why some people are so obsessed with purely philosophical issues that they think QT is somehow incomplete. The only incompleteness I'm aware of is the pressing issue of the missing quantum theory of gravity (and, in view of the common geometrical interpretation of GR, probably also of spacetime).
All these 'purely philosophical issues' also happen to be mathematical issues, i.e. of interest to some mathematicians who do not care about physics at all except as a guide to understanding better and broadening the theory of mathematics itself; this makes the desire to answer foundational issues in physics a strictly scientific endeavour, whether or not there is any necessity for such discussions from experiment (NB: this is true whether or not mathematics is seen as a science).

Case in point: both string theory and twistor theory cannot be called 'physics' by any stretch of the imagination, yet no one questions the fruits they offer indirectly to physical theory. Purely mathematical reformulations and extensions of such models may one day lead to the mathematical discovery of a new theory which will turn out to be physics; foundations of QM research has similar intentions.
 
  • #119
Fra said:
As QM is formulated and corroborated, the observer is mandatory and implicit - always. So anyone trying to solve the foundational problems of QM by removing the observer appears, to me, not to appreciate the heart of a measurement theory.

Well, I think that's completely backwards. To solve the measurement problem MEANS to remove the observer as a fundamental element of QM.
 
  • #120
stevendaryl said:
Well, I think that's completely backwards. To solve the measurement problem MEANS to remove the observer as a fundamental element of QM.
I have a suspicion that orthodox QM only works experimentally at all because it is de facto a relational theory, possibly the first of its kind within natural science. If this is true, then there is the possibility that a reductionistic understanding will never be possible, not even in principle, i.e. that the anti-realists are correct.

This can be interpreted in two ways: the relational nature of QM is either fundamental or is itself an approximation to some underlying non-relational theory. This dichotomy can only be answered by remodelling the foundations of physics using branches of pure and applied mathematics which physicists - especially experimentalists - usually have no experience with whatsoever.

Note however that relational theories aren't new in science at all; they are only new in natural science. There are extremely advanced mathematical models in the social sciences cooked up by physicists and mathematicians who decided to do some freelance work in social sciences; these theories all tend to be applied models within the dynamical systems approach to science also known as complexity theory.
 
  • #121
stevendaryl said:
To solve the measurement problem MEANS to remove the observer as a fundamental element of QM.
Removing the observer is a form of reification. However, the physical concepts devised by the observer to build objectification will remain.

/Patrick
 
  • #122
Auto-Didact said:
Case in point: both string theory and twistor theory cannot be called 'physics' by any stretch of the imagination, yet no one questions the fruits they offer indirectly to physical theory. Purely mathematical reformulations and extensions of such models may one day lead to the mathematical discovery of a new theory which will turn out to be physics; foundations of QM research has similar intentions.
Fine, indeed string theory & Co. are no physics but maybe interesting mathematics, but what these discussions about the foundations of QT have to do with math I don't see. Do you have an example?
 
  • #123
vanhees71 said:
Fine, indeed string theory & Co. are no physics but maybe interesting mathematics, but what these discussions about the foundations of QT have to do with math I don't see. Do you have an example?
The point of such discussions is to lead to a premise, importantly a premise on which a consensus is reached by disagreeing participants (preferably experts in all the possible kinds of views), which can subsequently be mathematicized into a new theory. Of course, you could argue that guessing premises out of thin air and then mathematicizing can be done randomly, but that is usually not all that productive, as Feynman addresses here:

As Feynman points out, theoretical physics is difficult because not just any dumb guess will lead to a premise which could result in a mathematical model that is actually interesting to other experts, let alone a correct physical theory. What this means is that the practice of theoretical physics is an art form, and that there are therefore theoreticians who are simply better at constructing new successful theories than others: given similar necessary mathematical skills, one is simply more creative than the other.

Historically, many of those better theoreticians (e.g. Newton, Leibniz, Einstein, Poincaré, Bohr, Feynman) got their creative guesses from foundational discussions or reading, which they distilled into a single conceptual notion that they could analyse mathematically, inventing new mathematics in the process. (NB: Feynman, for all his criticisms of philosophers, was actually a very avid reader, especially pre-Manhattan project, reading among other things about all the foundational issues of his day, including Poincaré's work on the philosophy of science and all of the classics in physics and beyond, including Descartes, Newton and Leibniz. Everything he read he sought to understand in a truly foundational sense; this might have been the true secret to his genius.)

The process of doing actual science, especially fundamental science, is an extremely messy endeavor and practically never can be characterized by a straight path from A to B. In fact any science which can be characterized in such a manner is almost always completely trivial or even engineering and not really science.

In any case, examples from the present:
- Bohmian mechanics, which still lacks a relativistic completion; this makes it as a mathematical object far more interesting than orthodox QM because orthodox QM has already been milked to death while the construction of such an explicitly nonlocal relativistic theory may lead to a revolution in mathematics.
- the relational interpretation of QM which has led among other things to the construction of LQG by Ashtekar et al.
- the construction of the non-commutative geometry programme by Connes et al.
- causal dynamical theories heavily dependent upon notions from discrete pure mathematics and intrinsically incompatible with continuous pure mathematics.
- several QM collapse theories which are currently undergoing experimental falsification: there is actually the possibility that one of these will come out successful making QM a limiting case of one of these theories.
 
  • Like
Likes eloheim, Fra and julcab12
  • #124
stevendaryl said:
Well, I think that's completely backwards. To solve the measurement problem MEANS to remove the observer as a fundamental element of QM.
I think this is how many see it, so you do not have to explain your position. Your view is also consistent with how most physicists view and understand observer equivalence - as a kind of observer invariance or symmetry, which also resonates with how the Standard Model is built.

The only problem is that this runs into big difficulties when trying to incorporate gravity and understand unification without fine-tuning problems.

For me observer equivalence is not a symmetry; it is conceptually more like a democracy, where the symmetries are the result of negotiation. But those of us who think along these lines are in the minority, and I do not have many well-written papers to refer to.

/Fredrik
 
  • #125
Auto-Didact said:
The process of doing actual science, especially fundamental science, is an extremely messy endeavor and practically never can be characterized by a straight path from A to B. In fact any science which can be characterized in such a manner is almost always completely trivial or even engineering and not really science.
I agree. This messy endeavor is also what people find annoying and try to hide. Popper tried to straighten out the scientific process by focusing on the cleaner corroboration and falsification steps, sweeping the creative process under the rug.

But the original creativity in science lies in hypothesis generation, because, as noted in post #123, while hypothesis generation is in a sense as random as natural variation in evolution, it needs to be guided and have some stability. This part of the scientific process is important. By now everyone understands corroboration/falsification, but few want to think about hypothesis generation because its non-deductive nature is simply embarrassing. Popper explicitly wanted to cure it, but failed; he just managed to hide it a bit.

It is maybe better for the image of "hardcore deductivists" to dismiss it as philosophy, not relevant to science :-) and yet I am sure that in the brain of every single scientist there are plenty of embarrassing processes you want to keep to yourself, publishing only the cleaned-up stuff.

/Fredrik
 
  • Like
Likes eloheim and Auto-Didact
  • #126
Well, reading a lot about the history of physics, I come to the opposite conclusion: The great "heroes" of theoretical physics all based their great findings on a solid empirical foundation. E.g., Newton's theory of gravitation was firmly based on the knowledge of Kepler's Laws, Maxwell's theory of electromagnetism on Faraday's comprehensive experimental findings and the field concept (derived by Faraday from his experiments). Einstein's SRT was based on Maxwell's equations and the fact that they are not Galilei invariant, as well as on the corresponding observations concerning symmetries under boosts and the independence of the speed of light from the motion of the source and detector. GR was based on the empirical fact of the (weak and strong) equivalence principle.

The same holds for QT: it was discovered to resolve some "clouds on the horizon of theoretical physics" at the time. First of all there was black-body radiation, for which the thermostatistics of classical electromagnetic theory led to the utterly wrong result of an infinite energy density (UV catastrophe), with the solution found by Planck from the evaluation of high-precision data from the PTR (Rubens, Kurlbaum et al.). Also Einstein's work on the photoelectric effect, though nowadays known to be incorrect, was based on empirical input, particularly the independence of the electron's kinetic energy from the intensity of the e.m. field and the quasi-instantaneous onset of the effect when irradiating the plate. Bohr's atomic model, though nowadays also known to be incorrect, was based on Rutherford et al.'s findings about the scattering of ##\alpha## particles on a gold foil, etc. I could go on and on.

The only example of a profound idea about physics arising from philosophical issues or apparent problems of QT is Bell's work on entanglement. His merit, however, is to have brought the issue from philosophical speculations a la EPR and Bohr's answers to them to a clear physical implication of an alternative class of theories (deterministic local hidden-variable theories) contradicting QT, which could be experimentally tested. We know the result: QT is correct, but no deterministic local hidden-variable theory is. That's why QT survived all the quibbles physicists and philosophers have with it: it describes the empirical facts more accurately than any other theory so far. The problem of those who think there is a problem is thus, in fact, that there is no problem with the foundations.

The only open problem is the lack of a quantum-gravity theory, and from the experience summarized above, I fear that without some empirical input to guide a clever theorist to another ingenious new idea, there'll be no chance to find such a theory. On the other hand, with new observational tools at hand (gravitational-wave detection and multi-messenger astronomy seem to be the most promising), maybe such an observation may become possible in the not too distant future.
 
  • #127
vanhees71 said:
Well, reading a lot about the history of physics, I come to the opposite conclusion: The great "heroes" of theoretical physics all based their great findings on a solid empirical foundation. E.g., Newton's theory of gravitation was firmly based on the knowledge of Kepler's Laws, Maxwell's theory of electromagnetism on Faraday's comprehensive experimental findings and the field concept (derived by Faraday from his experiments). Einstein's SRT was based on Maxwell's equations and the fact that they are not Galilei invariant, as well as on the corresponding observations concerning symmetries under boosts and the independence of the speed of light from the motion of the source and detector. GR was based on the empirical fact of the (weak and strong) equivalence principle.

I don't actually agree with those examples as illustrating what you say they are illustrating. Newton and Einstein were very much influenced by conceptual matters. For empirical purposes, there is no need for General Relativity, for example. Or Special Relativity, for that matter. You can just (as is done in the post-Newtonian expansion) assume that physics is approximately described by Newtonian mechanics, and then include higher-order non-Newtonian correction terms in a power series in ##\frac{1}{c^2}##. Let the terms in that expansion be determined experimentally. There is no need for a theory such as General Relativity that attempts to understand the differences in terms of a concept of curved spacetime.
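Schematically (my own illustration of the kind of expansion meant here, with the coefficients ##\alpha_i## treated as free numbers to be fitted, not a formula quoted from anywhere), one could write something like
$$\frac{d^2\mathbf{x}}{dt^2} = -\frac{GM}{r^2}\,\hat{\mathbf{r}}\left[1 + \alpha_1\frac{v^2}{c^2} + \alpha_2\frac{GM}{r c^2} + \mathcal{O}(c^{-4})\right]$$
and determine the ##\alpha_i## experimentally, with no spacetime concept behind them.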

For Einstein and Newton, and I would say all great physicists, it was important to come up with concepts for understanding how the world works. Coming up with a numerical algorithm for predicting the results of observations was not the goal.

I really do believe that there is a stark contrast between what some scientists claim the point of science is, and what actually motivates people to become scientists in the first place, and what motivates people to care about science.
 
  • Like
Likes eloheim, kith and Auto-Didact
  • #128
vanhees71 said:
The only open problem is the lack of a quantum-gravity theory

You mean, the only one that you are interested in?
 
  • #129
stevendaryl said:
For Einstein and Newton, and I would say all great physicists, it was important to come up with concepts for understanding how the world works. Coming up with a numerical algorithm for predicting the results of observations was not the goal.

I think it's completely wrong to say that the goal of science is to make falsifiable predictions. The goal is understanding the world. Falsifiable predictions are a way of testing that understanding.
 
  • Like
Likes eloheim and Auto-Didact
  • #130
vanhees71 said:
Well, reading a lot about the history of physics, I come to the opposite conclusion: The great "heroes" of theoretical physics all based their great findings on a solid empirical foundation. E.g., Newton's theory of gravitation was firmly based on the knowledge of Kepler's Laws, Maxwell's theory of electromagnetism on Faraday's comprehensive experimental findings and the field concept (derived by Faraday from his experiments). Einstein's SRT was based on Maxwell's equations and the fact that they are not Galilei invariant, as well as on the corresponding observations concerning symmetries under boosts and the independence of the speed of light from the motion of the source and detector. GR was based on the empirical fact of the (weak and strong) equivalence principle.
You seem to take a very experimental view of science, when actually, especially in physics, it is the mathematics which often predicted the experiment. Newton and his invention of calculus in order to study mechanics is the ultimate exemplar of this; there were many before him who had Kepler's data, but none who also had his creative insight and mathematical skill to actually invent a qualitatively different method - in fact an entirely new form of mathematics - in order to be able to frame his hypotheses, instead of merely fitting some data points by experimental analysis, which any learned fool was capable of.

Learning new mathematics and inventing new mathematics for framing conceptual premises are two things of an entirely different order, which is today not nearly enough appreciated by many physicists, who often tend to severely underestimate the skill, creativity and insight required to invent new mathematics, merely because they themselves were able to learn the centuries-long perfected form of that subject in college or high school by being spoonfed from a book/teacher.

Moreover, many modern mathematicians and scientists tend to mistake what is heritage for what is history. Recalling the words of Feynman, there is sufficient reason to be careful to distinguish one's expertise in a subject from one's expertise on the history of that subject:
Feynman said:
What I have just outlined is what I call a ‘physicist’s history of physics’, which is never correct… a sort of conventionalized myth-story that the physicist tell to their students, and those students tell to their students, and it is not necessarily related to actual historical development, which I do not really know!
Stenlund concurs with this view, stating:
Stenlund said:
The normal interest in history of mathematics (among mathematicians who write history of mathematics) is interest in our mathematical heritage. This interest therefore tends to be conditioned by the contemporary situation and is not always an interest in what actually happened in mathematics of the past regardless of the contemporary situation. Only history in the latter sense deserves to be called history.##^1##
But history and heritage are often confused and one consequence of this kind of confusion is that the transformation of mathematics at the beginning of modern times is concealed. Features of modern mathematics are projected upon mathematics of the past, and the deep contrasts between ancient and modern mathematics are concealed. As a consequence, the nature of modern mathematics as symbolic mathematics is not understood as the new beginning of mathematics that it was.

1. Grattan-Guinness, I., 2004. The mathematics of the past: distinguishing its history from our heritage. Historia Mathematica, vol. 31, pp. 163-185.
There actually was a time when I was as sceptical about the usefulness of the work of historians and philosophers for physics as you seem to be, even going as far as to publicly belittle them to their faces, but I no longer hold such strong views. In fact, I am now quite impartial on the matter in its full generality.

This change of view happened after I had actually gained some experience in doing the type of research that they do, whereupon I realized that my 'physicist's understanding' and criticism of the practice of history and philosophy was practically a strawman attack, almost completely wrong about the nature of their work due to wrong assumptions picked up from physics and science more generally; this is why I am quite sure that other physicists, especially those with less actual academic research experience in history and philosophy, are wrong when they criticize historians and philosophers.

I think we are all the better when historians and philosophers intercede and try to contribute to the history/philosophy of science; it keeps us from segregating too far from each other into separate domains, and the dialogue keeps both parties sharp. I try to keep an open point of view and have learned at least to take pleasure in reading such literature from any of the sides, especially when the debate gets fierce; sometimes I hope that such old texts will reveal to me something which has become lost over time; I am sure that at least Einstein and Feynman did the same, since they are both on record as having said so.
 
  • Like
Likes eloheim and Tendex
  • #131
stevendaryl said:
For empirical purposes, there is no need for General Relativity, for example. Or Special Relativity, for that matter. You can just (as is done in the post-Newtonian expansion) assume that physics is approximately described by Newtonian mechanics, and then include higher-order non-Newtonian correction terms in a power series in ##\frac{1}{c^2}##. Let the terms in that expansion be determined experimentally. There is no need for a theory such as General Relativity that attempts to understand the differences in terms of a concept of curved spacetime.

For Einstein and Newton, and I would say all great physicists, it was important to come up with concepts for understanding how the world works. Coming up with a numerical algorithm for predicting the results of observations was not the goal.

I really do believe that there is a stark contrast between what some scientists claim the point of science is, and what actually motivates people to become scientists in the first place, and what motivates people to care about science.
I could not agree more; I want to understand the world. This is why I became a scientist, and it is the same reason that countless colleagues and students have given to me as their drive for going into and/or staying in science, especially when speaking in a non-professional setting.

I would like to end by giving Feynman's view at the end of his Messenger Lectures (from 49:10 until the end):
 
  • #132
stevendaryl said:
I don't actually agree with those examples as illustrating what you say they are illustrating. Newton and Einstein were very much influenced by conceptual matters. For empirical purposes, there is no need for General Relativity, for example. Or Special Relativity, for that matter. You can just (as is done in the post-Newtonian expansion) assume that physics is approximately described by Newtonian mechanics, and then include higher-order non-Newtonian correction terms in a power series in ##\frac{1}{c^2}##. Let the terms in that expansion be determined experimentally. There is no need for a theory such as General Relativity that attempts to understand the differences in terms of a concept of curved spacetime.

For Einstein and Newton, and I would say all great physicists, it was important to come up with concepts for understanding how the world works. Coming up with a numerical algorithm for predicting the results of observations was not the goal.

I really do believe that there is a stark contrast between what some scientists claim the point of science is, and what actually motivates people to become scientists in the first place, and what motivates people to care about science.
There's no empirical need for General Relativity or Special Relativity? Are you kidding?

Of course, theory is about coming up with concepts to describe nature, but the art is to find the right concepts, and this doesn't work without a solid foundation in empirical facts. Einstein himself didn't find many more great theories in his later years, because he lost contact with the empirical facts of his time. Once he was looking for some fictitious "unified classical field theory", i.e., trying to solve a problem that was not there to begin with, he didn't find any such breakthrough.
 
  • #133
stevendaryl said:
I think it's completely wrong to say that the goal of science is to make falsifiable predictions. The goal is understanding the world. Falsifiable predictions are a way of testing that understanding.
Of course, Popper's view is completely insufficient to describe how science works. There's also sometimes the case that ever more and ever better tests of an existing theory confirm it better and better, which leads to a lack of new input to solve (real physics!) problems: an example is the Standard Model of elementary particle physics, which seems not to be the "final theory", since it doesn't explain why we are here (lack of CP-violation strength to explain the matter-antimatter asymmetry in the observable universe).

Physics always progresses through a close relation between experiment and theory. There's also no recipe for finding great new theories. One approach is for sure not successful: looking for solutions of pseudoproblems without any solid empirical foundation for even the very existence of the problem, and that's what I think all these many words about philosophical problems of QT are.
 
  • #134
vanhees71 said:
There's no empirical need for General Relativity or Special Relativity? Are you kidding?
There certainly wasn't when Einstein invented either of them. The problems with Mercury's orbit were merely seen as purely quantitative experimental curiosities, not important observations necessarily requiring a full philosophical reconsideration of the very foundations of physics: Newtonian theory was as untouchable to physicists back then as QT is to many physicists today.

Luckily for us, Einstein did not listen to the experimentalists and theorists who believed literally that 'physics was almost complete' and rebelliously pressed on with his conceptual questions, going as far as to frame his new ideas in a completely new mathematical theory.

This was certainly not without struggle or strife; he was even called a heretic by many older, well-respected physicists who had already made their names within the scientific establishment. The insults from the physics establishment only stopped after Eddington made the experimental measurements vindicating Einstein.
 
  • #135
Well, the entire thinking about the trouble with electromagnetism, in regard to the fact that Maxwell's equations are not Galilei invariant, started when the experiment by Michelson and Morley found no "aether wind". That's when FitzGerald, Lorentz, Poincaré, et al. started to modify the theory of the aether with all kinds of hypotheses (particularly the Lorentz-FitzGerald contraction hypothesis). Einstein's ingenious insight was that all this was unnecessary if you solve the symmetry problem by changing the space-time model. As you can read in his famous paper of 1905, his motivation was indeed to cure the then common interpretation of Maxwell's theory, which assumed asymmetries that are in fact not observed (sic!), by taking the symmetries for granted and modifying the space-time description.

Einstein was not a "heretic" concerning his SRT paper. You can see this from the fact that even the most conservative theoreticians, like Planck, almost immediately welcomed Einstein's paper and worked on relativity themselves soon thereafter (Planck even corrected Einstein's overcomplicated mechanics of the original 1905 paper as early as 1906).

Where Einstein was a "heretic" from the point of view of his contemporary theoretical colleagues was with respect to his "light-quantum hypothesis". Planck even excused this "heresy" when it came to hiring Einstein in Berlin in 1914, saying that a young physicist may need such heretical ideas to find something new; and indeed the "radiation problem" was not solved at the time, and Einstein is quoted as having said that he was more worried about this problem than about relativity. As it turned out, of course, Einstein was wrong with his "heretical ideas", but a full solution of this problem was only given in 1926 when QED was discovered (by Jordan and Born in one of the first papers on matrix mechanics, but then abandoned as "too much" by most theoreticians, so that the whole idea had to be rediscovered a few years later by Dirac).
 
  • #136
vanhees71 said:
Physics always progresses through a close relation between experiment and theory. There's also no recipe for finding great new theories. One approach is for sure not successful: looking for solutions of pseudoproblems without any solid empirical foundation for even the very existence of the problem, and that's what I think all these many words about philosophical problems of QT are.

The problems with QM show that we don't understand it. Its foundation is contradictory. The goal of science is understanding the world. It's not a philosophical problem, it's a science problem.
 
  • #137
vanhees71 said:
There's no empirical need for General Relativity or Special Relativity? Are you kidding?

No, there is no need for General or Special Relativity. They are important conceptually. You can have the same predictive power in an ad hoc theory that just uses a power series with empirically determined coefficients.
 
  • #138
vanhees71 said:
Well, the entire thinking about the trouble with electromagnetism, in regard to the fact that Maxwell's equations are not Galilei invariant, started when the experiment by Michelson and Morley found no "aether wind". That's when FitzGerald, Lorentz, Poincaré, et al. started to modify the theory of the aether with all kinds of hypotheses (particularly the Lorentz-FitzGerald contraction hypothesis). Einstein's ingenious insight was that all this was unnecessary if you solve the symmetry problem by changing the space-time model.

Right. The important contribution of Einstein was conceptual. As far as equations are concerned, the Lorentz transformations were developed prior to Einstein (that's why they aren't named the Einstein transformations)
 
  • #139
In my opinion, people going on about what is and is not science are basically doing philosophy of science. And badly. At the same time that they are saying how worthless philosophy is.

Falsifiability is a way to test our understanding. It is not a goal in its own right. If people generally believed that the goal of science is to come up with falsifiable predictions, I don't think anyone would actually want to go into science. People go into science because they want to understand the world. That's also the reason that science is funded (well, applied science is funded in the hopes that good technology will come from it, but when it comes to the forefronts of physics today, such as quantum gravity, there is almost zero expectation that useful technology will come from it).
 
  • Like
Likes akvadrako and Auto-Didact
  • #140
stevendaryl said:
No, there is no need for General or Special Relativity. They are important conceptually. You can have the same predictive power in an ad hoc theory that just uses a power series with empirically determined coefficients.
I don't think that is true. For gravity, Einstein was motivated more by 'free fall', which is not explained by Newtonian gravity. Free fall was 'experimentally' observed. To explain this meant, for Einstein, producing the predictive equations. We still don't know what gravity is.
 
  • #141
stevendaryl said:
In my opinion, people going on about what is and is not science are basically doing philosophy of science. And badly. At the same time that they are saying how worthless philosophy is.

How can people limit themselves to discussing science when they don't agree what science is? Maybe there should be some reasonable guidelines about where the boundary is. And of course you're right — talking about what constitutes science is clearly in the realm of philosophy; science can't be self-defining.
 
  • #142
stevendaryl said:
No, there is no need for General or Special Relativity. They are important conceptually. You can have the same predictive power in an ad hoc theory that just uses a power series with empirically determined coefficients.
Well, only someone who is inclined to philosophy rather than science, and at the same time does not know the historical development of physics, could say that.
 
  • #143
A. Neumaier said:
DarMM said:
Basically you can still replicate Wigner's friend even under a frequentist view.
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
I've been thinking about this more and I still don't really see what is really changed by a Frequentist view.

So in a Frequentist/Ensemble view, somebody has loads of copies of a quantum system ##S##. For a given property ##A## then ##P(A_i)## is the fraction of ensemble members with the ##i## value for ##A##. ##P(A_i|B_j)## is the fraction of the subensemble with value ##j## for ##B## who have value ##i## for ##A## etc in the Classical case.

In the quantum case this might be measuring one property ##A## after another property ##B## and you could consider the combination of the measurement of ##B## + the original preparation to be a preparation itself. The difference between ##P(A_i)## and ##P(A_i|B_j)## is then just the differences in the proportions of the ensemble with property ##A_i## given the two preparations.

In the ensemble view the new thing about QM is that ##P(A_i)## and ##P(A_i|B_j)## mesh differently depending on whether you perform the ##B## measurement or not. If you do they are related as:
$$P(A_j) = \sum_{i}P(A_j|B_i)P(B_i)$$
if you do not, it is (assuming ##B## is a SIC-POVM, say, just to have a simple formula):
$$P(A_j) = (d + 1)\sum_{i}P(A_j|B_i)P(B_i) - 1$$

So including the ##B## measurement is not simply the filtering to a subensemble of the original ensemble but in fact must be considered the preparation of a new ensemble. That's a major difference in the ensembles of Classical and Quantum Physics.
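Here is a quick numerical check of that second relation (a sketch of mine for a qubit, ##d = 2##, taking ##B## to be the tetrahedral SIC-POVM, ##A## a projective ##Z## measurement, and the post-measurement states to be the SIC projectors):
```python
# Check P(A_j) = (d + 1) * sum_i P(A_j|B_i) P(B_i) - 1 for a qubit (d = 2):
# B is the tetrahedral SIC-POVM, A a projective Z measurement, and the state after
# obtaining SIC outcome i is taken to be the corresponding SIC projector.
import numpy as np

d = 2
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
Pi = [(I2 + a[0]*sx + a[1]*sy + a[2]*sz) / 2 for a in bloch]   # SIC projectors Pi_i
E = [P / d for P in Pi]                                        # SIC-POVM elements E_i
assert np.allclose(sum(E), I2)                                 # they form a POVM

A = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]  # Z projectors

rho = (I2 + 0.3*sx - 0.2*sy + 0.5*sz) / 2      # an arbitrary test state

for A_j in A:
    q_direct = np.trace(rho @ A_j).real                        # A measured directly
    p_B = [np.trace(rho @ E_i).real for E_i in E]              # P(B_i)
    r_AB = [np.trace(P @ A_j).real for P in Pi]                # P(A_j | B_i)
    q_via_B = (d + 1) * sum(p * r for p, r in zip(p_B, r_AB)) - 1
    print(round(q_direct, 6), round(q_via_B, 6))               # the two agree
```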

So with all this out of the way I don't think much different is going on in Wigner's friend. The friend performs a measurement that Wigner cannot see. Since Wigner is utterly sealed off from the friend, part of the preparation is that he cannot see the friend's measurement. Hence over an ensemble of "friend labs" and measuring all the observables he has access to, which include some superobservables related to the lab's atomic structure, the statistics he finds are best described by the superposed state:
$$\frac{1}{\sqrt{2}}\left(|\uparrow, D_{\uparrow}, L_1\rangle + |\downarrow, D_{\downarrow}, L_2\rangle\right)$$
 
  • #144
vanhees71 said:
Well, only someone who is inclined to philosophy rather than science, and at the same time does not know the historical development of physics, could say that.

Einstein was concerned with the concepts of physics, not just deriving the equations. You disagree with that?

I would say that what you're calling philosophy is actually physics. It's what people like Einstein did.
 
  • #145
A. Neumaier said:
No, because both Wigner and his friend only entertain subjective approximations of the objective situation. Subjectively everything is allowed. Even logical faults are subjectively permissible (and happen in real subjects quite frequently).
DarMM said:
the statistics he finds are best described
But subjective assignments of states do not need to be the best ones. Most real subjects assign suboptimal states to complex situations. To require best according to some criterion makes the observers fictitious.

Frequentists only have actual frequencies of actual (though - especially when infinite - not necessarily fully known) ensembles of actual systems, and the probabilities apply to these. Frequentists have a platonic truth about what is real, independent of the approximation they have to use when making numerical calculations. But what they know (and hence assign) are approximations of these probabilities only, based on somewhat subjective assumptions about the estimation procedure, and limited access to the data.

These only give approximate and sometimes quite erroneous states. Just like drawing a concrete circle gives only an approximation to the ideal platonic circle. Blackboard drawings of circles for illustrative purposes may even be quite poor approximations, often not even topologically equivalent.
 
  • #146
How does that affect Wigner's friend though? Just replace the language with "he conjectures the lab is described by the superposed state" which he checks by looking at the statistics.

I mean I understand they are proposing true objective frequencies and in actual applications they only have estimates of that. However what has this got to do with or alter about Wigner's friend?
 
  • #147
DarMM said:
How does that affect Wigner's friend though? Just replace the language with "he conjectures the lab is described by the superposed state" which he checks by looking at the statistics.

I mean I understand they are proposing true objective frequencies and in actual applications they only have estimates of that. However what has this got to do with or alter about Wigner's friend?
From Wikipedia:
Wikipedia said:
An observer W observes another observer F who performs a quantum measurement on a physical system. The two observers then formulate a statement about the physical system's state after the measurement according to the laws of quantum theory.
For a frequentist, there is no updating at all, since what Wigner and/or his friend conjecture about the state is completely irrelevant.
A. Neumaier said:
All subjective updating happens outside probability theory when some subject wants to estimate the true probabilities about which the theory is.
What counts is the true state of the full system, and what one gets depends on what is taken to be the full system. The true measurement results (independent of whether and/or how reliably they are observed by anyone) can be used to approximate the true state, and their statistics can be predicted by the true state. Thus we always have one state of the maximal system considered, the reduced states of the subsystems considered, and the actual measurement statistics, which conforms up to errors of ##O(N^{-1/2})## with the prediction from these by the quantum formalism.

On the other hand, the state is altered by the measurement, in a way depending on the details of the measurement. This new state is obtained by the dynamical law of the full system (including the measurement device); by unitary evolution if the full system is isolated. One can approximate this in various ways; choosing an approximation is already subjective.

To find out this state one cannot ask observers but must perform a theoretical calculation based on assumptions made about a model for the measurement, or do quantum tomography. The latter is possible only for the tiny subsystem measured and hence only gives the corresponding reduced state; moreover, all results obtained are only approximate. The former gives, under appropriate assumptions and in certain approximation schemes, a von Neumann collapse - but only of the tiny subsystem. To extend this to an approximate state of the full system requires additional assumptions (max. entropy and the like). In any case, the assumptions and the approximation schemes employed determine which approximation to the state of the full system is obtained, and consistency requires that the subsystems' states are the corresponding reduced states. Thus everything is determined by these assumptions and approximation schemes, and is sub/objective to the extent that these assumptions and approximation schemes are considered sub/objective.

What should be added by any updating argument? It affects neither these states nor the statistics.
It affects only how people with different subjective views of the matter approximate these states by their preferred estimation procedure (which may or may not be close to some axioms about rational behavior) from the part of the statistics available to them (plus anything they get to ''know'' from hearsay or pretend to ''know'').

Thus I think that the Wigner's friend puzzle makes no sense from a frequentist perspective. Like most of the foundational puzzles, the paradoxical features come from an overidealization of the true situation.
 
  • Like
Likes Auto-Didact
  • #148
stevendaryl said:
Einstein was concerned with the concepts of physics, not just deriving the equations. You disagree with that?

I would say that what you're calling philosophy is actually physics. It's what people like Einstein did.
Of course Einstein was concerned with the concepts of physics, as any theoretician is. What I find ridiculous is the claim that observable facts played no role in creating his special and general theories of relativity, or that these theories are irrelevant for phenomenology.

Indeed, Einstein in his younger years did physics and created profound new insights in statistical mechanics, relativity, and quantum theory. Each of the 3+1 famous papers of 1905 is alone worth a Nobel Prize. The irony is that his Nobel certificate is the only one I know of which explicitly states what Einstein did not get the Nobel Prize for, namely his theories of relativity. Nowadays we know that this was due to philosophical reasons. Rightfully, Bergson, who is the culprit in this affair, is forgotten today, but Einstein is not. In later years Einstein got caught up in his philosophical prejudices, ignoring the observed facts, and did not contribute much to physics from then on. He's a paradigmatic example of the danger of philosophy in the natural sciences ;-)).
 
  • #149
A. Neumaier said:
What should be added by any updating argument? It affects neither these states nor the statistics.
It affects only how people with different subjective views of the matter approximate these states by their preferred estimation procedure (which may or may not be close to some axioms about rational behavior) from the part of the statistics available to them (plus anything they get to ''know'' from hearsay or pretend to ''know'').

Thus I think that the Wigner's friend puzzle makes no sense from a frequentist perspective. Like most of the foundational puzzles, the paradoxical features come from an overidealization of the true situation
Sorry but I still don't really understand.

Let me try something more basic.

In the typical presentation the friend models the system as being in the state ##\frac{1}{\sqrt{2}}\left(|\uparrow\rangle + |\downarrow\rangle\right)##; upon measurement and obtaining the ##\uparrow## outcome he models later experiments with the state ##|\uparrow\rangle##. In an ensemble view he could consider the original preparation and his measurement as a single new preparation.

However Wigner uses the superposed state I mentioned above.

Both of these assignments are from using the textbook treatment of QM.

You're saying that if you are a frequentist something is wrong with this. What is it? Wigner's state assignment or the friend's or both?

If this actually bears out and you are right, I think you should consider writing something on this, as I've never heard that frequentism alters the details of Wigner's friend.
 
  • #150
vanhees71 said:
What I find ridiculous is the claim that observable facts played no role in creating his special and general theories of relativity, or that these theories are irrelevant for phenomenology.
But that isn't the claim at all; the claim is that the abstruse mathematics of a novel conceptualization of an observation (a conceptualization generalizing far beyond the standard notion of how the original observation is perceived, all in order to reach a deeper understanding of things) was the driving force for discovering SR and GR. The key point to take away is that the mathematization only has to come/start once the conceptualization is correct; premature mathematization should be avoided at all costs!

As should be clear we don't need to go as far as Einstein, since as I said before many, many people before Newton had Kepler's data yet they were not actually using the methodology of theoretical physics as we know it today, simply because it wasn't explicitly invented yet by anyone. Newton however made an enormous conceptual leap, and then - being the best mathematician of his time - mathematicized his purely conceptual thoughts to the very extreme.

Conceptualizing and subsequently mathematicizing is de facto the original methodology of mathematical theoretical physics as invented by Newton and explained in detail in the beginning of his Principia. He did this purely to satisfy his own philosophical curiosity i.e. expand his own understanding of the world; he literally didn't even care to share his findings with anyone for years until goaded on by Halley et al.

This kind of extreme conceptualization is characteristic of some particular kinds of mathematicians and non-experimentally thinking physicists - who often become theoreticians - such as Feynman and Einstein as well. Looking back at the mathematicians we see it in Hamilton, Euler, Gauss, Riemann and Poincaré. It is arguable that this manner of thinking is rarely seen in a mathematician after Poincaré. Hadamard did a study on this phenomenon and summarized it in a short book.
 
