Criteria for a good quantum interpretation

  • #251
PeterDonis said:
Ether theory is off topic here, but incorrect factual claims about it are worth correcting. Your claim here is false: the specific ether theory @AnssiH was referring to, Lorentz Ether Theory, is mathematically equivalent to (and makes all the same experimental predictions as) standard Special Relativity, not Newtonian mechanics.
It is on-topic in a historical context. I always thought it assumed Galilean relativity, with the Lorentz transformations a mere appearance due to the shortening of objects as they move through the aether. However, that would be way off-topic in this thread, and I think it would best be taken up in the Relativity subforum. All predictions are equivalent to SR - but the reason is completely different. This may be due to confusion between the existence of a preferred frame and LET, where the aether is assumed to have certain properties. From a historical perspective, it would be interesting to get to the bottom of that.

Thanks
Bill
 
Last edited:
  • Like
Likes AnssiH
  • #252
bhobba said:
While we have the standard axioms of QM detailed on this forum, my statement is easier to see using the two axioms from Ballentine - QM - A Modern Development.

They are:

1. To each observation there corresponds a Hermitian operator O, whose eigenvalues are the possible outcomes of the observation.

2. The average of the outcomes, A, is given by the Born rule, which says A = Tr(OS), where S is a positive operator called the system state.

Note - it is an interpretational matter whether S is something real, or simply a mathematical aid in calculating that average, or, with a bit of probability theory, the probability of a particular outcome. Axiom 2 actually follows from axiom 1 via Gleason's Theorem, whose foundation is non-contextuality. That was the famous mistake von Neumann made in his no-hidden-variables proof, but that is another story for a separate thread.

The point is that all of QM assumes observations - that's it - that's all. What an observation is, is not made particularly precise. To remedy that, it is usually taken to be when some change is left here in the macro world. More technically, once the theory is developed further, decoherence has occurred, and the system is in a mixed state where each element of the mixed state is a possible outcome. With even greater sophistication, it can be developed into a quantum theory of measurement to include things like probes:
http://www.quantum.umb.edu/Jacobs/QMT/QMT_Chapter1.pdf

The point is, at rock bottom, QM is a theory about calculating the probabilities of observations. The state is simply a calculational aid in determining those probabilities.
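As a minimal numerical sketch of axiom 2 (the observable and state below are arbitrary illustrative choices, not anything from the quoted post), the Born-rule average A = Tr(OS) can be evaluated for a qubit:

```python
import numpy as np

# Observable: Pauli X; its eigenvalues +1 and -1 are the possible outcomes.
O = np.array([[0., 1.],
              [1., 0.]])

# State: the pure state |+> = (|0> + |1>)/sqrt(2) as a density operator S.
plus = np.array([1., 1.]) / np.sqrt(2.)
S = np.outer(plus, plus)              # positive operator with Tr(S) = 1

# Born rule: the average of the outcomes is A = Tr(OS).
A = np.trace(O @ S)                   # equals +1 for this state

# A bit of probability theory gives individual outcome probabilities
# via the spectral projectors of O: p_i = Tr(P_i S).
vals, vecs = np.linalg.eigh(O)
probs = [np.trace(np.outer(vecs[:, i], vecs[:, i]) @ S) for i in range(2)]
print(A, probs)                       # average close to 1; probabilities sum to 1
```

Whether S here is "something real" or just a computational device is exactly the interpretational question at issue; the arithmetic is the same either way.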

Thanks
Bill

Why do you think QM is a probability theory to begin with? As Mermin points out (p. 10*):
Quantum mechanics is, after all, the first physical theory in which probability is explicitly not a way of dealing with ignorance of the precise values of existing quantities.

*N.D. Mermin, "Making better sense of quantum mechanics," Reports on Progress in Physics 82(1), 012002 (2019). https://arxiv.org/abs/1809.01639
 
  • Like
Likes bhobba
  • #253
bhobba said:
We discuss standard physics here, which, by the rules you agreed to when you signed up, means peer-reviewed papers, textbooks, talks by respected scientists, etc. If there is ever any worry about whether a source is acceptable, you can write to a mentor. That you did not simply post such a source for discussion, as I requested, suggests these are just ideas of your own. Personal theories (unless published in peer-reviewed journals or equivalent) are not allowed. As I said, this aether thing is not an interest of mine, but as you mentioned, GLET would do:
https://www.ilja-schmelzer.de/glet/
Sorry, but I don't understand the point of this. If GLET would do, fine - are you satisfied or not?
bhobba said:
I asked for the link between the aether and QFT. Yes, GLET is a legitimate theory about the aether; now you need to link it to QFT.
GLET derives essential parts of the SM, in particular the SM gauge group and the three generations of SM fermions. This is done using a lattice. Lattice theories are a reasonable way to define a QFT, as far as this is possible at all given the infinities. So what is missing here?
bhobba said:
Quantum fields, of which there are many, are not similar to the old idea of light as classical undulations of the aether. QFT obeys Lorentz symmetry, meaning it has the same properties regardless of the inertial reference frame.
I disagree. The math of QFT and quantum condensed matter theory is essentially the same. The Lorentz symmetry is the symmetry of the wave equation; thus, as long as the wave equation is fine as an approximation for condensed matter waves, there will be an effective Lorentz symmetry too. The Lorentz-transformed solutions are simply the Doppler-shifted solutions.
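That last claim can be checked with a one-line dispersion-relation computation (my own sketch; the parameter values are arbitrary). A plane-wave solution of the wave equation has ω = ck, and a Lorentz boost maps (ω, k) to Doppler-shifted values that still satisfy the same relation:

```python
import math

c, v = 1.0, 0.5                            # wave speed and boost velocity (arbitrary units)
k = 2.0                                    # wavenumber of a plane-wave solution
w = c * k                                  # dispersion relation of the wave equation

g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)    # Lorentz gamma factor
k_b = g * (k - v * w / c ** 2)             # boosted wavenumber
w_b = g * (w - v * k)                      # boosted (Doppler-shifted) frequency

print(abs(w_b - c * k_b) < 1e-12)          # True: the boosted wave is again a solution
```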
bhobba said:
LET explains this using the Lorentz hypothesis of length contraction. Internal length contractions of clock components changed time. In modern times we have atomic clocks, so such an explanation will not work.
As long as the equations of the matter fields are wave equations with the same c in them, the basic idea of LET works in the same way. Lorentz's idea, in modern language, was that if matter is held together by the EM field, the symmetry of the EM field will extend to whatever describes the matter too. OK, matter is also held together by the other forces of the SM, but they are all wave equations, they all share the symmetry with the same c, and so they will also have the same Lorentz symmetry. This works for atomic clocks too, as well as for everything constructed out of SM fermions and gauge fields.

This is not some complex theory of my own, but an elementary symmetry consideration; it would not be worth a separate publication because it is too simple and obvious, so I don't expect to find such things in a separate publication.

Let's also note that for the question discussed - whether realist non-local interpretations are in conflict with relativity - only the preferred frame hypothesis matters. And introducing it into SR changes nothing, given that we are allowed to ignore all non-preferred coordinate systems and do all the computations in the frame named "preferred" in SR too (also an elementary consideration for which I do not expect separate papers, because it would be too trivial to be publishable).
bhobba said:
It is not well known that LET is only philosophically different to modern SR ...
It is. If you don't know such elementary things, that would be your problem. But I doubt it - I think you know this very well. The preferred frame hypothesis is usually rejected on philosophical arguments like your angles argument, not because of some problem with experimental predictions, and once you use such arguments (instead of, say, claims that LET is in conflict with the MMX) you seem to know that there are no such experimental issues. If you like to reject some interpretations because of your personal philosophical preferences (like preferring a four-dimensional spacetime), that is your choice. This does not change the fact that I discuss here, namely that realist non-local interpretations of QT are compatible with relativity, and incompatible only with a particular interpretation of relativity which rejects the preferred frame hypothesis.
bhobba said:
All you have to do is post a source so we can discuss your ideas. It is not hard. There are issues here, such as whether the quantum vacuum is the aether (it isn't), or whether dark matter, the CMBR, etc. are the aether (again, they are not), but to discuss them, you need a source. Or you could ask why the CMBR is not considered the aether, but that would require a new thread - here you are assuming such a thing exists.
I have given a source. It does not have any of the issues you have mentioned here. The aim is not to discuss that particular theory here, nor your angles argument, but simply to give a reference to the actual state of research on the Lorentz ether interpretation of relativity. So, if you think, for whatever reason, that the Lorentz ether is outdated, wrong or whatever, I disagree and refer to this paper. If you have something to criticize in that theory, it would be better to start a new thread. If not, what's the problem?
bhobba said:
The issue of non-locality in QM is different. I was even a bit confused by it until I came across a paper at CERN. The CERN server is not working for me right now, so I can't give a link, but here is the gist. Bell showed, assuming the Kolmogorov axioms, that QM is incompatible with counterfactual definiteness. If we relax the Kolmogorov axioms requirement, i.e. assume from the start that QM is a Generalised Probability Theory, then the whole 'issue' is bypassed.
That's the sort of "solution" I don't consider worth studying. Giving up causality, realism, even probability theory or logic, in a situation where we have simple causal realistic models working with classical logic and probability theory, seems nonsensical to me.
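For reference, the Bell/CHSH tension both posts allude to can be made concrete numerically (my own sketch, using the standard textbook angles): any Kolmogorov model with counterfactual definiteness bounds the CHSH combination by 2, while the quantum singlet correlations E(a, b) = -cos(a - b) reach 2√2:

```python
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b."""
    return -math.cos(a - b)

# Standard CHSH angle choices (radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ≈ 2.83, above the classical bound of 2
```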
 
  • #255
RUTA said:
Why do you think QM is a probability theory to begin with? As Mermin points out (p. 10*):
Because of Gleason's Theorem and the Born Rule. But thanks for the link. I will give it some study when I get a bit of time. I have found over the years that my view of QM has changed somewhat, precisely because people like you have made me aware of things I had not considered, so I gave your reply my like. Thanks to Demystifier as well for the same reason.

Thanks
Bill
 
  • Like
Likes Demystifier
  • #256
Sunil said:
GLET derives essential parts of the SM, in particular the SM gauge group and the three generations of SM fermions. This is done using a lattice. Lattice theories are a reasonable way to define a QFT
All I am asking for is a paper where QFT is developed, including the aether. I know GLET because I discussed it many years ago with Ilja. It originally was not published, which many people thought a disgrace because it was of publishable quality. I am not interested in an aether or associated ideas, but I am very interested in science being done properly. If a paper is judged by experts like Steve Carlip and John Baez to be of publishable quality, as GLET was, it should be published IMHO. That has now been rectified. But it is not a quantum theory. It suggests some interesting things related to the SM, but it is not a theory that recovers the SM, a quantum theory. I am sure Demystifier will be only too happy to post one of his papers. He could even start a new thread, and we can get to the bottom of exactly why the aether is a problem for QFT.

I have given my reasons (e.g. in LET light is undulations of the aether) why it is not compatible with our current knowledge. But much has happened in physics since the days of LET, so that is no surprise. If you want to talk about an aether and QFT, much more detail, as found in a paper, is required. With my mentor's hat on, that is also one of our rules. You can't discuss the aether except in a historical context or a peer-reviewed paper.

Added later:
As Demystifier knows, the DBB interpretation of QM, according to some, implies an aether. Some think this makes it difficult, perhaps impossible, to extend to QFT. It is just something I have read, but since claims about an aether are being made, it needs clarifying. If it is impossible to incorporate into QFT, that is a big problem - likely fatal for the whole idea.

Thanks
Bill
 
Last edited:
  • Like
Likes romsofia and Demystifier
  • #257
bhobba said:
You can't discuss the aether except in a historical context or a peer-reviewed paper.
Good point!
 
  • #258
bhobba said:
But it is not a quantum theory. It suggests some interesting things related to the SM, but it is not a theory that recovers the SM, a quantum theory.
What is "not quantum" in "The lattice fermion fields we propose to quantize as low energy states of a canonical quantum theory with ##\mathbb{Z}_2##-degenerated vacuum state. We construct anticommuting fermion operators for the resulting ##\mathbb{Z}_2##-valued (spin) field theory."? (from the abstract of arXiv:0908.0591)
bhobba said:
I am sure Demystifyer will be only too happy to post one of his papers up. He could even start a new thread, and we can get to the bottom of exactly why the aether is a problem for QFT.
Feel free to start a thread about why the aether is a problem for QFT while we use QFT in condensed matter theory without any problem. For the problem that condensed matter theory gives only bosons while we also need fermions, see Schmelzer's paper.
bhobba said:
I have given my reasons (e.g. in LET light is undulations of the aether) why it is not compatible with our current knowledge.
Given that "our current knowledge" includes Schmelzer's papers, I don't think you have given sufficient reasons for this claim.
bhobba said:
But much has happened in physics since the days of LET, so that is no surprise. If you want to talk about an aether and QFT, much more detail as found in a paper is required. With my mentor's hat on, that is also one of our rules. You can't discuss the aether except in a historical context or a peer-reviewed paper.
The peer-reviewed paper has been given, so there is no need to go back to the Lorentz ether in its 1905 version given that we have a version from 2009, together with a large background of literature on how to do QFT for condensed matter theories, which can be applied in a straightforward way to quite general ether theories.

And, again, my actual intention is just to correct the wrong claim that non-local interpretations of QT are in conflict with relativity - they are not, given that interpretations of relativity with a preferred frame are viable.
 
  • Like
Likes Demystifier
  • #259
bobob said:
Why is everyone so gung ho to try to make quantum theory into some bastardized version of classical theory when classical theory cannot explain anything without approximating away or parameterizing things which are quantum in origin?

One reason may be related to attitudes towards the usefulness of fictions:
gentzen said:
I recently suggested as a possible explanation for the different prevalences of Mathematical fictionalism vs. physical fictionalism
Like many other mathematicians, I believe in a principle of 'conservation of difficulty'. This allows me to believe that mathematics stays useful, even if it would be fictional. I believe that often the main difficulties of a real world problem will still be present in a fictional mathematical model.
...
From my experience with physicists (...), their trust in 'conservation of difficulty' is often less pronounced. As a consequence, physical fictionalism has a hard time

But the specific example given to illustrate that explanation is itself a fiction: "So instead of accepting Bohmian mechanics as a useful fictional model with huge potential for analyzing various difficulties of quantum mechanics (and extracting insights about the real world from it), it was initially dismissed for being too obviously fictional."

It is a fiction, because we do know the historical reasons why Bohmian mechanics was dismissed:
1) Bohm himself had fallen out of favor with the physics community for political reasons.
2) A high-dimensional configuration space cannot substitute for the expected 3+1 dimensional space, and arbitrarily breaking symmetries is, well, arbitrary - together with some more general objections summarized by Heisenberg.

In fact, a slightly different reason is that more low-key figures like Arnold Sommerfeld and Max Born are ignored in favor of showmasters like Feynman or gurus like Bohr. And even Schrödinger and Heisenberg got mostly ignored when they tried later in their lives to engage in more low-key activities.
 
  • #260
bhobba said:
Because of Gleason's Theorem and the Born Rule. But thanks for the link. I will give it some study when I get a bit of time. I have found over the years, my view of QM has changed somewhat precisely because people like you have made me aware of things I had not considered, so I gave your reply my like. Thanks to Demysterfyer as well for the same reason.

Thanks
Bill
Sorry, I meant, "What is the nature of reality such that when we discovered QM, this theory ONLY gives results in terms of probability?" Obviously, QM is a probability theory :-)
 
  • Like
Likes bhobba
  • #261
bobob said:
"Interpretation" invariably means some scheme to make quantum mechanics behave like classical mechanics even at the expense of introducing very weird potentials and equally weird fictitious entities along with some "explanation" of how these things "really" don't violate other well known physics, like relativity.
I don’t think interpretation means making QM look like classical mechanics.
 
  • Like
Likes bhobba
  • #262
RUTA said:
Why do you think QM is a probability theory to begin with? As Mermin points out (p. 10*):

Quantum mechanics is, after all, the first physical theory in which probability is explicitly not a way of dealing with ignorance of the precise values of existing quantities.

*N.D. Mermin, ``Making better sense of quantum mechanics,'' Reports on Progress in Physics 82(1), 012002 (2019). https://arxiv.org/abs/1809.01639
I could be mistaken, but from what I remember of bhobba's older posts, he views QM as a generalized probability theory, and this makes sense to me. (Unless I mixed up the users here - I now see there is bobob; if so, I apologize for mixing it up.)

But what I THINK Mermin means with the above is probably not that QM is not a generalized probability theory, but that probability itself cannot be interpreted as an ignorance measure. By ignorance I think "hidden variables" is implicit. I.e., it has more to do with the interpretation of "probability". This also seems more consistent with Mermin's QBist view.

A few lines below that quoted text, Mermin also writes:

"If one does take a subjective view of probability, then a QBist understanding of quantum mechanics is unavoidable. But I prefer to put it the other way around: the success of QBism in clarifying the murk at the foundations of quantum mechanics is a compelling reason for physicists too to embrace the widespread view of probabilities as subjective personal judgments"
-- p22, https://arxiv.org/abs/1809.01639

Edit: I just noticed post #260!

/Fredrik
 
  • #263
bhobba said:
What an observation is, is not made particularly precise. To remedy that, it is usually taken to be when some change is left here in the macro world.
To be precise regarding the phrase "when some change is left":
One should keep in mind the so-called "negative-result measurements" or "Renninger-type experiments". The sheer possibility for a quantum system to interact with a detector of assumed perfect efficiency is, although the detector does not click, sufficient for the - so to speak - collapse of the wavefunction.
 
  • Like
Likes bhobba
  • #264
Sunil said:
Feel free to start a thread why the aether is a problem for QFT while we use QFT in condensed matter theory without any problem. For the problem that in condensed matter theory we have only bosons while we need also fermions see Schmelzer's paper.
QFT has methods used in both particle physics and condensed matter physics. True. Indeed, Wilson used such an analogy in his investigations of renormalisation. But that does not mean particle physics is a condensed matter theory. Such should be pretty obvious anyway - how can you use condensed matter (which is made of particles) to explain those particles? It makes no sense. Or rather, you would need more details on exactly what the condensed matter in such a theory is, or all you have is analogies.

We may be getting somewhere however in that we have a peer-reviewed paper using that idea:
https://arxiv.org/abs/0908.0591

I do not buy it for the reason I mentioned above - but others may be able to comment.

Thanks
Bill
 
Last edited:
  • #265
Fra said:
I could be mistaken, but from what I remember of bhobba's older posts, he views QM as a generalized probability theory, and this makes sense to me. (Unless I mixed up the users here - I now see there is bobob; if so, I apologize for mixing it up.)

But what I THINK Mermin means with the above is probably not that QM is not a generalized probability theory, but that probability itself cannot be interpreted as an ignorance measure. By ignorance I think "hidden variables" is implicit. I.e., it has more to do with the interpretation of "probability". This also seems more consistent with Mermin's QBist view.

A few lines below that quoted text, Mermin also writes:

"If one does take a subjective view of probability, then a QBist understanding of quantum mechanics is unavoidable. But I prefer to put it the other way around: the success of QBism in clarifying the murk at the foundations of quantum mechanics is a compelling reason for physicists too to embrace the widespread view of probabilities as subjective personal judgments"
-- p22, https://arxiv.org/abs/1809.01639

Edit: I just noticed post #260!

/Fredrik
I agree, Mermin and QBists are saying that ALL physics is truly about assigning probabilities to subjective belief states and therefore, even if the physics theory provides probability = 1 as in classical mechanics, it is a probability theory nonetheless. In our papers cited earlier, QM is necessarily a probability theory due to the relativity principle applied to the measurement of Planck's constant h. So, I was just wondering why bhobba thought QM is necessarily about probabilities. There is no doubt that QM can be viewed as a GPT. My question is, again, why is it that our fundamental theory of physics is a GPT? Is QM simply an incomplete version of something more fundamental that is non-probabilistic (as one views the kinetic theory of gases)? Or is the probabilistic nature of our fundamental theory unavoidable (as in QBism and our principle explanation)?
 
  • Like
Likes Fra and bhobba
  • #266
bhobba said:
QFT has methods used in particle physics and condensed matter physics. True. Indeed Wilson used such an analogy in his investigations on renormalisation. But that does not mean particle physics is a condensed matter theory. Such should be pretty obvious anyway - how can you use condensed matter (which is made of particles) to explain those particles? It makes no sense.
It makes sense. One uses condensed matter (which is made of atoms) to explain phonons - pseudo-particles which have the same mathematical properties as particles but are not really particles. It is much more than a mere analogy - the math, as far as it matters, is the same. We have a microscopic theory (the atomic lattice) which gives a large-distance limit - a field theory. And if that field theory gives a wave equation, we can use standard QFT techniques to obtain the corresponding pseudo-particles.
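The large-distance limit described here can be sketched with the textbook 1D monatomic chain (my own illustration; the values of K, m, a are arbitrary): its phonon dispersion ω(k) = 2√(K/m)|sin(ka/2)| becomes linear, ω ≈ c|k| with c = a√(K/m), at small k - exactly the wave-equation behavior the argument relies on:

```python
import math

K, m, a = 1.0, 1.0, 1.0              # spring constant, atom mass, lattice spacing
c = a * math.sqrt(K / m)             # sound speed in the continuum (field-theory) limit

def omega(k):
    """Phonon dispersion of the 1D monatomic chain."""
    return 2.0 * math.sqrt(K / m) * abs(math.sin(k * a / 2.0))

k = 1e-4                             # long-wavelength phonon
print(omega(k) / (c * k))            # close to 1: effectively a wave equation
```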

The ether theory would have to explain the particles we observe as phonons of the hypothetical fundamental structure. The fundamental structure is something different from those pseudo-particles. In condensed matter theory, these are atoms which have approximately fixed positions and oscillate around them. Those atoms have nothing to do with the phonons, which are (in the simplest case) free particles. Similarly, the fundamental structure of the ether has, at first look, nothing to do with the particles we observe - they are the pseudo-particles of the resulting lattice theory, or of the field theory obtained in the large-distance limit. If that field theory is the SM, then QFT gives the particles we observe.
bhobba said:
Or rather you would need more details on exactly what the condensed matter in such a theory is, or all you have is analogies.
We may be getting somewhere however in that we have a peer-reviewed paper using that idea:
https://arxiv.org/abs/0908.0591
I do not buy it for the reason I mentioned above - but others may be able to comment.
Ok, fine. For the most obvious objection - condensed matter theory gives only bosons as phonons, but we also need fermions - Schmelzer gives an explicit construction which gives fermions too.
 
  • #267
stevendaryl said:
I don’t think interpretation means making QM look like classical mechanics.
I agree. QBism is an interpretation of QM and it specifically highlights that as a GPT, QM is NOT like classical probability theory. So, that interpretation of QM is based on the DIFFERENCE between classical mechanics (CM) and QM. It does place CM and QM under the same umbrella (probability theories), but as distinctly different types of probability theory.
 
  • Like
Likes Fra
  • #269
RUTA said:
So, I was just wondering why bhobba thought QM is necessarily about probabilities. There is no doubt that QM can be viewed as a GPT. My question is, again, why is it that our fundamental theory of physics is a GPT? Is QM simply an incomplete version of something more fundamental that is non-probabilistic (as one views the kinetic theory of gases)? Or is the probabilistic nature of our fundamental theory unavoidable (as in QBism and our principle explanation)?
Now I got the point of your why question.

Reading the posts, I cannot read out bhobba's answer, unless his position is meant to be more agnostic about the why question, as in thinking it's too philosophical? I tried to skim the Dirac paper but couldn't find an answer to the question.

Bhobba, is this a fair summary of your position or did I miss something?

(My view is the latter option, but I seek a constructive motivation for the principal explanation as well. I see them as supporting each other in the following way: an explicit constructive theory requires a lot of assumptions that are not possible to defend fully. But an evolutionary theory may tune itself, to the point that the ambiguous starting points are less important asymptotically, and one could qualitatively spot asymptotic behaviour from the constructions and take the principles of the asymptotics. So to me the probable answer is that the GPT structure should be a result of self-organisation of relations between the parts. But the argument would require a tentative construction, similar to the Müller paper, except, as Müller himself points out, there are major pieces missing that may be critical. I trace this problem to a reconstruction of probability theory as measures of degree of belief itself, and also to why computational complexity matters, as each agent has only a certain computational capacity, and when tossing information the loss is minimal when the code is optimal.)

"the approach of this paper does not yet give a full-fledged theory that has all the properties that one would like it to have; in particular, it is not yet able to treat processes of “forgetting” or “memory erasure”, and Section 7 suggests that such processes might be of fundamental importance."
-- Müller, https://arxiv.org/abs/1712.01826

/Fredrik
 
  • #270
I’ll side more with Heisenberg :-)
 
  • Like
Likes bhobba
  • #271
RUTA said:
Sorry, I meant, "What is the nature of reality such that when we discovered QM, this theory ONLY gives results in terms of probability?" Obviously, QM is a probability theory :-)
Theories which give results in terms of probability are usually named stochastic theories. "Probability theory" is reserved for the mathematical theory (Kolmogorov) used to describe probabilities. So there is only one probability theory (OK, maybe some more in some alternative mathematics like intuitionism, but even if they exist, they play no role).
 
  • #272
Fra said:
Bhobba, is this a fair summary of your position or did I miss something?
My point is that Mermin had a particular view of the philosophy of science. Mine is a variant of Hawking's:
https://en.wikipedia.org/wiki/Model-dependent_realism

Heisenberg believed QM was complete in and of itself and had a paradigm-type view similar to Kuhn's (whose views I actually find maddening). Dirac believed, as I do, that science, especially physics, makes constant progress towards better models. Sometimes that progress is fast and startling, as in the early days of QM; at other times, it is more mundane and leisurely - but progress is made all the time. My view is that reality exists external to us. Science's job is to describe it using models. Those models get better and better. We may even reach an ultimate description that we can't improve upon (a theory of everything), or, as Feynman put it, it may be like peeling the layers of an onion and never stopping. We do not know. Either way, physicists will continually make progress by building on what went before. The models, in physics anyway, are mathematical, and it is the mathematics that is the important thing. I do not think you get too far trying to use words, but some try that. I usually find it unsatisfactory. That is why, when people ask for a popular book on QM, I recommend Susskind. It assumes just a smattering of calculus that even a grade 9 student could pick up in a weekend. That way, you get the 'real deal' from the beginning. As you progress, you will find things that need to be 'unlearned', so to speak, but that is minimised.

I think a good interpretation of QM is one that goes to the heart of the math, e.g. QM is simply a Generalised Probability Theory, likely the simplest after ordinary probability theory. So formally, to use everyday language, I subscribe to the Ensemble interpretation with decoherence added.

Thanks
Bill
 
Last edited:
  • #273
RUTA said:
I’ll side more with Heisenberg :-)
From your writings and book, I kind of figured that out. But I am always happy to buy and read books from science advisors, mentors, etc., on this forum.

Thanks
Bill
 
  • #275
bhobba said:
I think you should broaden your mathematical horizons:
https://en.wikipedia.org/wiki/Generalized_probabilistic_theory
Of course, one can name some mathematical structure a "generalized probability theory" if one likes. The question is if this makes sense. In my opinion, it makes no sense. Similarly for "generalized logic". These may be interesting structures from a mathematical point of view, but to name them "logic" or "probability theory" is only misleading.

Given that realist causal interpretations of QT are based on standard probability theory, there is no reason to take "quantum logic" or "quantum probability theory" seriously.
 
  • Like
Likes Demystifier
  • #276
RUTA said:
My question is, again, why is it that our fundamental theory of physics is a GPT? Is QM simply an incomplete version of something more fundamental that is non-probabilistic (as one views the kinetic theory of gases)? Or is the probabilistic nature of our fundamental theory unavoidable (as in QBism and our principle explanation)?
My reason is a model-building answer. I used to say it was because, in building models, differentiability and continuity are generally desired so that our powerful calculus methods can be used. I used to refer to a paper by Hardy explaining that, but now I have a different approach.

In QM, what is going on, we do not have direct experience with, all we have direct experience with is how it affects the world around us. I will take a straightforward model of such a situation - the Markov chain of turning a coin over. We can't see the coin is turned over; all we can do is see, say every second, that the coin is heads, tails, heads, tails etc. The Markov chain for this is straightforward - anyone that has studied Markov chains can do it. Here in Aus, HS students study simple Markov chains like this. So we have a model for the system that gives the result each second. But wanting to improve the model, we ask what is going on at say 1/2 a second. We can't know, just like we can't know what is going on at the quantum level. But we know math. Let us take the square root of the Markov matrix. Low and behold, we find it is complex. The resulting state at half a second is also complex. But states, being probability vectors, can't be complex. We see the need to extend probability theory to cover complex probabilities and connect them to ordinary probabilities. That is where Gleasons Theorem comes in. It basically says there is really only one reasonable way to do it. We can develop a whole generalised probability theory that way - in fact, it is QM. We get the two axioms of Ballentine. As per chapter 3 of Ballentine, we progress further from the simple symmetry principle that the probabilities are frame independent (formally, of course, we are invoking the POR) and, again for simplicity, use Galelaian relativity. We get Schrodinger's equation. Now that is where the fun really begins. We apply Ehenfest's Theorem and note something interesting. It is the same as the Hamiltonian from Classical Mechanics. Have we discovered something profound? We can go a bit further by deriving the path integral approach that will show classical mechanics necessarily falls out via the Lagrangian (Landau does this in Mechanics). It emboldens us to do something daring. 
Let's take classical Hamiltonians and quantise them by replacing energy, momentum, etc., with the operators in Schrödinger's equation. What do we find when we test the resulting predictions? We get results in accord with experiment. We have discovered something strange about nature - it uses the generalised probability model suggested by our simple Markov chain extension. People may ask - why is nature like that? It is all very reasonable - we have mathematically extended probability theory to see what happens. But why does it need extending? That I can't answer - future research may shed light on it - but we are forced to accept that every model is based on assumptions. If you manage to explain those assumptions, whatever you used to explain them also has assumptions. It is never-ending. We also find that this model we call QM is wrong: it must be extended to QFT (e.g. it can't explain spontaneous emission). And so it goes. It is simply a never-ending quest for better and better models. Why nature is like this is something we can't really know. That does not stop people from trying, but mostly I find such attempts unconvincing. Just me, of course. There is no right or wrong answer to that conundrum.
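The half-second square root claimed above is easy to check directly. Here is a minimal sketch (mine, not from the original post) in plain Python for the coin-flip chain:

```python
# Sketch (not from the post): verify that a square root of the
# coin-flip Markov matrix is complex. 2x2 matrices as nested lists.

# Markov matrix for "turn the coin over each second": heads <-> tails.
M = [[0, 1],
     [1, 0]]

# A square root R of M, found by diagonalising M: its eigenvalues are
# 1 and -1, so any square root picks up sqrt(-1) and must be complex.
R = [[(1 + 1j) / 2, (1 - 1j) / 2],
     [(1 - 1j) / 2, (1 + 1j) / 2]]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

RR = matmul(R, R)           # squaring R recovers M
state = [R[0][0], R[1][0]]  # "state" at half a second, starting from heads = [1, 0]
```

Squaring R recovers M exactly, while the half-second "state" has non-zero imaginary parts - precisely the situation that motivates extending probability theory.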

This is very similar to Dirac. He played around with equations until he understood them inside out, then tried extending them. Sometimes it worked, sometimes it did not. Feynman and Landau were also masters at it. They were not concerned with why nature was like that - it was enough for them. There is a psychological issue here - some, like me, are satisfied with this approach; others are not. Einstein, while having an excellent command of the technical apparatus of physics like Dirac and Feynman, actually relied on physical insight rather than just playing around with equations. Feynman did so as well. Landau I do not know enough about to comment on.

Bohr and Einstein were of course concerned with the philosophy of this, but later physicists less so. Feynman was actually contemptuous. Personally, I am ambivalent.

Thanks
Bill
 
Last edited:
  • Like
Likes Lord Jestocost
  • #277
Sunil said:
Of course, one can name some mathematical structure a "generalized probability theory" if one likes. The question is if this makes sense. In my opinion, it makes no sense. Similarly for "generalized logic". These may be interesting structures from a mathematical point of view, but to name them "logic" or "probability theory" is only misleading.
Reasonable view. But can't you see it is just semantics? And one of the silliest things there is to argue about is semantics. Still, if it floats your boat, that's fine with me.

There is no reason not to take them seriously either. If you hang around the interpretation subforum long enough, as I once did, you will find a lot of it is like that. Just personal preference. That's part of the reason I do not post in this subforum as much.

Thanks
Bill
 
  • #278
Sunil said:
And, again, my actual intention is just to correct the wrong claim that non-local interpretations of QT are in conflict with relativity - they are not, given that interpretations of relativity with a preferred frame are viable.
Exactly!
 
  • #279
Sunil said:
Of course, one can name some mathematical structure a "generalized probability theory" if one likes. The question is if this makes sense. In my opinion, it makes no sense. Similarly for "generalized logic". These may be interesting structures from a mathematical point of view, but to name them "logic" or "probability theory" is only misleading.

Given that realist causal interpretations of QT are based on standard probability theory there is no reason to take "quantum logic" or "quantum probability theory" seriously.
Completely agreed!
 
  • #280
Sunil said:
Probability theory is reserved for the mathematical theory (Kolmogorov) used to describe probabilities. So there is only one probability theory (ok, may be some more in some alternative mathematics like intuitionism or so, but they, even if they exist, play no role).
Claims that there would be only "one (true) logic" or "one (true) probability theory" feel too simplistic to me. For example, mathematical logicians agree that both syntax and semantics are important aspects of logic. Of course, minor differences in syntax or semantics don't give rise to (true) different logics, but major differences do: Propositional logic, predicate logic, quantified propositional logic, intuitionistic propositional logic, quantified intuitionistic propositional logic, or intuitionistic propositional logic with Hindley-Milner restricted quantification are all different logics. And they don't just exist, but they do play important roles in their respective application domains.

Sunil said:
Of course, one can name some mathematical structure a "generalized probability theory" if one likes. The question is if this makes sense. In my opinion, it makes no sense. Similarly for "generalized logic". These may be interesting structures from a mathematical point of view, but to name them "logic" or "probability theory" is only misleading.

Given that realist causal interpretations of QT are based on standard probability theory there is no reason to take "quantum logic" or "quantum probability theory" seriously.
Weasel words like "generalized" are indeed problematic. A respectable framework for "quantum logic" and "quantum probability theory", like Griffiths' consistent histories approach, doesn't need them. It can explain directly why it satisfies the requirements of a logic that enables one to base a probability theory on it.

Of course, being a logic and probability theory might not yet be enough for playing an important role in interesting application domains. But it is already much better than the typical "quantum logic" systems that focus too much on syntax and neglect (useful/illuminating) semantics.
 
  • #281
bhobba said:
But that does not mean particle physics is a condensed matter theory. Such should be pretty obvious anyway - how can you use condensed matter (which is made of particles) to explain those particles? It makes no sense. Or rather you would need more details on exactly what the condensed matter in such a theory is or all you have is analogies.
Analogies aren't bad per se. We should only stay away from misleading analogies and obsolete metaphysical baggage. Condensed matter theorists have good reasons to think of their theories as describing really existing structures in liquid helium-3. Pursuing the analogy in the opposite direction, it is likewise more useful to think of QFT as describing really existing correlations in the fabric of spacetime, rather than "particles" moving about and having undefined properties until they are "measured".

QFT is a self-contained theory ("closed" in the sense of Heisenberg): a machinery for calculating correlation functions. It is not mere semantics - putting the emphasis on "particles" distracts from the essence of the theory.
 
  • #282
gentzen said:
Claims that there would be only "one (true) logic" or "one (true) probability theory" feel too simplistic to me. For example, mathematical logicians agree that both syntax and semantics are important aspects of logic. Of course, minor differences in syntax or semantics don't give rise to (true) different logics, but major differences do: Propositional logic, predicate logic, quantified propositional logic,
Their difference is essentially only that some other parts of the language have been formalized. Of course, as a mathematical object, something which has only "and", "or", "not" as operations is different from something which also has ##\exists## and ##\forall##. And one can just as well add many other sufficiently general words, formalize their properties, and add them to the "rules of logic". But the rules for "and", "or", "not" hold in exactly the same way in all such extensions. So these are not different logics.
gentzen said:
intuitionistic propositional logic, quantified intuitionistic propositional logic, or intuitionistic propositional logic with Hindley-Milner restricted quantification are all different logics.
Intuitionism is also not really different. Classical logic is part of it, with the classical ##\exists## translated as ##\lnot\forall\lnot##, so we simply have some additional formalized notion of "constructive existence", which otherwise follows classical logic too.
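To spell out the translation mentioned here (standard material, my summary): constructively one still has
$$(\exists x\, P(x)) \;\to\; \lnot\forall x\,\lnot P(x),$$
while the converse, ##\lnot\forall x\,\lnot P(x) \to \exists x\, P(x)##, requires the law of excluded middle. Reading the classical ##\exists## as ##\lnot\forall\lnot## is what lets classical reasoning be embedded in the intuitionistic system.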
gentzen said:
And they don't just exist, but they do play important roles in their respective application domains.
Maybe, I don't care. For me, these are all part of classical logic.
gentzen said:
Weasel words like "generalized" are indeed problematic.
"Generalized" is not a weasel word, it has a quite well-defined meaning in mathematics - some properties of the non-generalized variant no longer holds, the non-generalized thing fulfills the axioms of the generalized thing too, but there are other, generalized things which are not non-generalized things. And this makes "generalized logic" nonsensical - the rules of logic hold, always, everywhere. One may criticize the words, and it is quite justified that the classical "exists" is slightly misleading, and ##\lnot\forall\lnot## more accurate. Similarly, "from A follows B" suggests some causal connection which "A or not B" does not have. Such objections against logic are possible and reasonable. But such possible improvements of verbal expressions does not change the logic.

A "quantum logic" which claims that some parts of classical logic are no longer valid is nonsense. The consideration of the mathematical formalism of things which follow axioms which are somehow similar but weaker than those of classical logic is fine, if that mathematical object is named in a non-misleading way, lattice or so. Just to name it "logic" makes no sense.
gentzen said:
A respectable framework for "quantum logic" and "quantum probability theory", like Griffiths' consistent histories approach, doesn't need them. It can explain directly why it satisfies the requirements of a logic that enables one to base a probability theory on it.
I have no objection against Griffiths' [in]consistent histories approach as far as it is not in conflict with classical logic and probability theory. If it were in conflict with classical logic, it would have to be thrown away immediately. But I see no basis for such a claim.
 
  • #283
bhobba said:
Reasonable view. But can't you see it is just semantics? And one of the silliest things there is to argue about is semantics.
I see that it is just semantics. And from the point of view of a mathematician, there would be no point in arguing about this. Mathematicians, if they want to study something different, give it a different name and everything is fine.
In physics, it should in principle be similar. Usually it works; nobody is confused by "color" in QCD or so. But there are some places in physics where we have misleading names, and where this leads to real confusion. "Quantum logic" is one such place. There are really people out there who think that the rules of logic or probability theory are no longer valid in the quantum domain. Really. Or who think that "nonlocal" is something horrible, and don't recognize that a theory which is local in any meaningful sense of the word, but allows a maximal speed greater than c, is today named "nonlocal".
 
  • #284
Sunil said:
A "quantum logic" which claims that some parts of classical logic are no longer valid is nonsense. The consideration of the mathematical formalism of things which follow axioms which are somehow similar but weaker than those of classical logic is fine, if that mathematical object is named in a non-misleading way, lattice or so. Just to name it "logic" makes no sense.
My point (or claim) is that the problem of "quantum logic" is that its semantics is not sufficiently interesting or useful, and that it therefore fails to be a logic. (It has models, because it is an algebraic structure, but those models are not sufficiently interesting. Probably Birkhoff and von Neumann also defined a more interesting semantics, but I am not familiar with it, and I guess this is also true for you.) Whether all axioms of classical logic are still true is less important, as demonstrated by intuitionistic logic.

More generally, my claim is that your position that there is essentially only one logic and only one probability theory is closely related to the fact that the role of a semantics for a logic remains a bit obscure to you. This is no surprise, because the syntax of a logic is much more visible, and its role is also easier to appreciate.
 
  • #285
bhobba said:
My reason is a model building answer.
...
This is very similar to Dirac. He played around with equations until he understood them inside out then tried extending them. Sometimes it worked, sometimes it did not. Feynman and Landau were also masters at it. They were not concerned with why nature was like that - it was enough for them. There is a psychological issue here - some like me are satisfied with this approach - others are not. While having an excellent command of the technical apparatus of physics, like Dirac and Feynman, Einstein actually used his physical insight rather than just playing around with equations. Feynman did so as well. Landau, I do not know enough about to comment.
Thanks, this post explained more than the other one. Yes, it seems the psychology here is setting us apart.

I am driven by how I think an intrinsic inference process works, as seen by an imagined physical agent on the inside, and I am bothered by the fact that QM as it stands is NOT cast in this form, and hence appears incomprehensible. I am one of those who is not satisfied with the Dirac approach as you describe it. I am convinced there is a deeper, graspable reason why the laws of physics at some level look like pretty rational rules of inference, one that may unify the laws of physics and the rules of scientific inference at a deep level. This, for example, invites comparative questions: what is the inferential status of a "state"? What is the inferential status of a "law" (i.e. a Hamiltonian)? If both are the result of an inference (generalized measurement/inference), then why are they treated asymmetrically in current theory?

This is why I "understand" QM as i stands, as almost an theory inference relative to classical observers, looking at atomic scale systems. In this domain QM is fine! But what about the rest? How about cosmological perspectives? or what about big bang, where the classical observers was not around? And what about explanation of hamiltonjians of standard model? If the laws of physics ARE rules of inference, then shouldn't the hamiltonians of standard model correspond to some optimally stable code?

/Fredrik
 
  • #286
bhobba said:
We have discovered something strange about nature - it uses the generalised probability model suggested by our simple Markov chain model extension. People may ask the question - why is nature like that? It is all very reasonable - we have mathematically extended probability theory to see what happens. But why does it need extending? That I can't answer - future research may shed light on it - but we are forced to accept every model is based on assumptions.

Thanks
Bill
Our answer to that question, i.e., “Why the qubit instead of the classical or generalized bit?” is the relativity principle, as we explain in our publications. It’s quite simple, but you have to accept a principle explanation without any constructive counterpart (like with special relativity).

I agree with you that all theories are constantly being improved, e.g., Hamiltonian and Lagrangian methods for Newtonian mechanics. But, those improvements don’t invalidate what has been improved. Heisenberg’s point is that Newtonian mechanics still holds in its realm of applicability, Hamilton’s methods do not “disprove” Newtonian mechanics. Likewise, QFT improves QM, but QM still holds in its realm of applicability. I think that’s what Heisenberg meant by “closed.” If QM is really ”open,” then it could be refuted in its realm of applicability. I doubt that will happen.
 
  • Like
Likes vanhees71
  • #287
RUTA said:
Our answer to that question, i.e., “Why the qubit instead of the classical or generalized bit?” is the relativity principle, as we explain in our publications. It’s quite simple, but you have to accept a principle explanation without any constructive counterpart (like with special relativity).
Given RUTA's perspective, I wonder how many of us here accept, without requiring further explanation, the concept of an upper bound on signal speed in space, which is the key postulate beyond observer equivalence leading to SR?

How many of us ask, and seek an explanation for, WHY there exists an observer-invariant upper bound on signal speed?

/Fredrik
 
  • #288
Sunil said:
But there are some places in physics where we have misleading names, and where this leads to real confusion. "Quantum logic" is one such place. There are really people out there who think that the rules of logic or probability theory are no longer valid in the quantum domain. Really. Or who think that "nonlocal" is something horrible, and don't recognize that a theory which is local in any meaningful sense of the word, but allows a maximal speed greater than c, is today named "nonlocal".
I agree there seems to be a mess. Sometimes people try to recast things into "forms" that suit their own personal insight and mental abstractions. I am certainly guilty of that myself. But at the same time, if one can make unambiguous progress, in the end one can postpone discussing formal classifications. The fact that we have different mental abstractions makes communication harder at times.

To me "quantum logic" is really just an extension using the normal logical rules, where one replaces a single event space, with several of them, that are not independent (and hence does not commute in the inference scheme). I also don't like thinking that this is some magic quantum logic. I see nothing magic in this, instead one can understand it using normal logic of course. The question is more, WHY would and agent, hold multiple dependent sets of information? This is where the inference insight suggests its beacuse it's more efficient. Ie. the quantum agent will have an aevolutionary advantage and outcompete the classical agent. This wording, I am thinking probalby makes not sens or adds no insight for those who think in terms of say bohmian ways. I just accept that it's difficult to understand each others logic. I find it interesting still that even different approaches meet at times and find commong junctions.

/Fredrik
 
  • #289
Fra said:
Given RUTA's perspective, I wonder how many of us here accept, without requiring further explanation, the concept of an upper bound on signal speed in space, which is the key postulate beyond observer equivalence leading to SR?

How many of us ask, and seek an explanation for, WHY there exists an observer-invariant upper bound on signal speed?

/Fredrik
As long as general relativity is not disproven by observation, I tend to "believe" in it as a valid description of Nature, and thus that there is a limiting velocity for causal effects/signal propagation (though one should be cautioned that, as with everything in GR, this holds only locally!).
 
  • Like
Likes dextercioby
  • #290
Fra said:
Given RUTA's perspective, I wonder how many of us here accept, without requiring further explanation, the concept of an upper bound on signal speed in space, which is the key postulate beyond observer equivalence leading to SR?

How many of us ask, and seek an explanation for, WHY there exists an observer-invariant upper bound on signal speed?

/Fredrik
Note, we're not claiming that there is no constructive counterpart to the principle theory of SR. All we're pointing out is that it's been 116 years since SR was published and the physics textbooks still do not present a constructive counterpart when introducing SR. It seems most physicists have given up on that idea.

Our goal with QM is to show that it can be introduced as a principle theory in analogous fashion. Indeed, you can even use the same relativity principle (just applied to a different constant of Nature, h). We believe this offers an additional means of introducing QM to first-year students, so as to resolve the mystery of entanglement just as the mysteries of time dilation and length contraction are resolved in SR. In SR, the relativity principle + c means that Alice says Bob's clocks run slow and his meter sticks are short, while Bob says the same about Alice's clocks and meter sticks. In QM, the relativity principle + h means that Alice says Bob's measurements of X are only correct on average, while Bob says the same about Alice's measurements of X.

It's been 86 years since the EPR paper with neither a (consensus) constructive nor principle account of entanglement. At least now we have a principle account as robust as that for SR. We believe that constitutes some progress.

The QM question analogous to your SR question would be, "Why does there exist an invariant minimum size h to the action in an emission or absorption of energy/momentum?" The principle account does not answer that question nor the one you asked regarding c.
 
  • Like
Likes Fra
  • #291
RUTA said:
Note, we're not claiming that there is no constructive counterpart to the principle theory of SR. All we're pointing out is that it's been 116 years since SR was published and the physics textbooks still do not present a constructive counterpart when introducing SR. It seems most physicists have given up on that idea.
I rather like the following I have posted many times before:
http://www2.physics.umd.edu/~yakovenk/teaching/Lorentz.pdf

Some books like Rindler and Morin have similar arguments. I rather like Morin because the classical mechanics part has difficult problems with solutions that are very instructive to work through - it also introduces advanced ideas like Noether's Theorem.

Your 'question' would then be: why is the c from that derivation finite? There are many experimental and theoretical reasons why it is the speed of light, but that does not answer why nature chose it. I have no idea, except, of course, what future research may turn up. Even more basic is why an inertial frame, which strictly speaking is an abstraction that does not exist, is so useful. It is thought that deep in interstellar space inertial frames exist to a high accuracy, but that does not explain why the concept is so useful. The POR is a statement about something that is just an abstraction. To me, that is really strange. In part, it goes to the heart of a model's relationship to what it is modelling. I did a formal course on mathematical modelling, and the lecturer mentioned it, but just said it's part of why the world is actually comprehensible.
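For reference, derivations of that kind (my sketch of the result; notation may differ from the linked notes) end with a one-parameter family of transformations fixed by the POR alone:
$$x' = \frac{x - vt}{\sqrt{1 - Kv^2}}, \qquad t' = \frac{t - Kvx}{\sqrt{1 - Kv^2}},$$
where ##K = 0## gives the Galilean transformations and ##K = 1/c^2## the Lorentz transformations. The POR fixes the form; whether ##K## is zero, and its value if not, is left to experiment.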

RUTA said:
The QM question analogous to your SR question would be, "Why does there exist an invariant minimum size h to the action in an emission or absorption of energy/momentum?" The principle account does not answer that question nor the one you asked regarding c.
Well, we have derivations similar to those for c, such as the one found on page 82 of Ballentine, and even what Dirac wrote all those years ago in Principles of QM, Chapter 4, based on analogies with Poisson Brackets. Again, that explains why such a number exists, not its value - specifically, why it is non-zero. Again, its value is determined from experiment.
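Schematically, the Poisson-bracket analogy referred to (my shorthand, not Dirac's exact presentation) is the replacement
$$\{A, B\} \;\longrightarrow\; \frac{1}{i\hbar}\,[\hat A, \hat B],$$
so that the classical ##\{x, p\} = 1## becomes ##[\hat x, \hat p] = i\hbar##. The structure forces some constant with the dimensions of action to appear; that it is non-zero, and what its value is, come from experiment.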

To me, it is part of a wider issue: our theories often demand that a value exists, but as to why it has the value it has - blank-out. I think that is an issue several physicists are grappling with. Possible explanations like the anthropic principle have been proposed.

If history is any guide, the answer often turns out to be completely unexpected. Then afterwards, people say - why did we not see that before. It was like that with QM. Here is a 'modern' introduction:
https://www.scottaaronson.com/democritus/lec9.html

Nothing like that would have occurred to the early pioneers.

Thanks
Bill
 
  • #292
RUTA said:
Note, we're not claiming that there is no constructive counterpart to the principle theory of SR. All we're pointing out is that it's been 116 years since SR was published and the physics textbooks still do not present a constructive counterpart when introducing SR. It seems most physicists have given up on that idea.
I absolutely got that point, and that should make your claims in general more interesting.

/Fredrik
 
  • #293
AnssiH said:
It's well known that Lorentz ether theory is only philosophically different from Special Relativity - Special Relativity did not push any specific interpretation forward strongly. But Minkowski did, and it is harmful that there are so many people, professionals included, who are not clear on the difference between Minkowski and Special Relativity.

bobob said:
No, what might be "harful" is not really understanding what you are objecting to. If you assume some ether theory, you have implicitly chosen a spacetime geometry that gives you Newtonian physics, i.e., a central extention to the Galilean group, which actually is more complex than choosing the Poincare group for the spacetime of special relativity. So, not only did you choose a spacetime geometry without realizing it, you are trying to introduce a fictional ether that makes it even more complicated as well as makes unphysical predictions to turn it into the spacetime you did not choose for some weird reason known only to ether advocates. As for unphysical predictions, if there is an ether, there should be pressure waves associated with it, yet none have been observed. Choose the right spacetime from the beginning and avoid all of the problems associated with choosing the wrong one and trying to compensate for choosing incorrectly.
Hi Bobob,

I'm afraid your response is a great example of just the kind of slight confusion that I referred to as "harmful". SR is taught today from Minkowski's spacetime perspective so pervasively that most people assume the two are inextricably married. Just because most people are used to thinking in terms of some paradigm doesn't mean it is the only paradigm. Lorentz's ether theory is the theory the Lorentz transformation came from; SR is a paper pointing out that the convention of relativistic simultaneity yields the same transformation as symmetrical between reference frames. That equivalence does not tell us what form ontological reality takes, of course.

To be a scientific mind, it's imperative to differentiate between what can be known, and what can't.

So the way I'd prefer to view this topic is that the only relevant (philosophical) difference is whether relativistic simultaneity is to be taken as representing an observational limit (because systems can only synchronize using the fastest information speed available) or an ontological limit (real existence).

If you are familiar with Einstein's paper, you probably know he very much emphasized the observational-limit aspect, but did not make explicit claims one way or the other in the ontological sense. That is why it could be argued it is more philosophically neutral.

In Minkowski's view, relativistic simultaneity represents an ontological limit, which requires a static reality - unlike Special Relativity.

In my opinion it is a little bit silly to claim that a static reality is a philosophically more elegant idea than a dynamic one, and it is harmful that so many people believe this is the only option we have, because they believe Minkowski = Special Relativity. 🤷‍♂️ When they do, they close their eyes to possibilities that exist well within Special Relativity, but not within Minkowski (exactly what @Sunil was pointing out). -Anssi
 
  • #294
AnssiH said:
The only difference for Lorentz version is that there is a specific state to reality - we just can't probe what it is.

PeterDonis said:
If by "state" you mean "preferred frame" or "preferred set of surfaces of simultaneity", this is correct.
Basically yes, but I still want to comment that I'd greatly prefer that people think of this from the perspective of "what does relativistic simultaneity represent". That's why I am drawing attention to the question: "is there some real state to reality before we can see it?"

If one assumes there is - we just can't probe what it is because we can't know c - then they are assuming the core philosophy of Lorentz.

If one assumes there is not - that the state of reality around us is actually a static spacetime block and our conscious experience is an illusion concocted by how the mind works - then they are assuming the core philosophy of Minkowski.

It just blows my mind that so many people tacitly believe the latter is the only option we logically have :smile:

-Anssi
 
  • #295
bhobba said:
Your 'question' would be, why is the c from that derivation finite. There are many experimental and theoretical reasons why it is the speed of light, but it does not answer why nature chose that. I have no idea, except, of course, what future research may turn up.

I have convinced myself of some intuitive abstractions sufficient to guide me forward, but from there to working out the explicit details is a big step. My hunch is that the reason for the upper bound on "change" is related to the lower bound on action, even in a constructive approach. When constructing "inside measures", it seems natural that any agent has upper and lower bounds, because the agent's information and processing capacity is limited. This is why distinguishing arbitrarily high rates of change would violate the "capacity" of the agent.

Think about it: intuitively, to confidently infer a big change is the same thing as to confidently infer something a priori improbable. This implies a tension, because a "big change" means a high information divergence, and in terms of spacetime, space is ORDERED as per some locality principle. Anything that distorts this would likely eventually also distort spacetime. It's not hard to imagine that for an agent there has to be a kind of uncertainty relation between the confidence in an improbable observation and the confidence in the prior. That is, there is an inertia at play here, which means that a finite agent cannot make arbitrarily confident inferences and still survive. So then, considering the asymptotic limit where agents are supposed to be stable, one gets observer equivalence.

To me the problem is to find the right mathematical or algorithmic abstractions needed to turn this into a proper model, and then to figure out how to solve it. Right now I am leaning towards agent-based models as the most natural ones for modelling "interacting theories".

So I feel that associating the two constants at the level of principle, as RUTA does, makes sense, as I think they are related even in a constructive approach (if anyone were to succeed with one).

/Fredrik
 
  • #296
bhobba said:
It is on-topic in a historical context. I always thought it assumed Galilean relativity and the Lorentz Transformations are a mere appearance due to the shortening of objects as they move through the aether. However, that will be way off-topic in this thread, and I think it would best be taken up on the Relativity subforum. All predictions are equivalent to SR - but the reason is completely different. This may be due to confusion between the existence of a preferred frame and LET, where the aether is assumed to have certain properties. From a historical perspective, that would be interesting to get to the bottom of.
Yeah, that's an astute observation - I think the topic is possibly muddied by various ideas of assigning properties to the ether that may have nothing to do with the core issue of the "nature of simultaneity". Indeed, the only logical difference is that in LET you just assume some universal frame and stick with it.

Now this could be half a step off-topic as well, but it is a philosophically interesting point to think about. People usually view the Big Bang theory from the General Relativistic perspective, where the idea is that reality started from a spacetime singularity, and when things cooled down enough, the universe became transparent. The remnant of this event is, of course, the cosmic microwave background radiation.

The idea implies that at some point in time all of the universe was plasma sitting roughly in a single reference frame, and, starting from the singularity, would have cooled down to a transparent state roughly simultaneously everywhere. This is the explanation for why the CMBR still today emanates - roughly - from a single magical frame (all the images of the CMBR you see are actually synthetically corrected to remove the Doppler-shift contribution of the motion of the Earth and our galaxy).

So within that theory, I would say it would be perfectly reasonable to tie "universal simultaneity" to our ability to measure the simultaneity of the cooling of "all of reality" (in that popular interpretation). I mean, the basis of this philosophy was "if we can't measure it, it doesn't exist". Well, if all we can measure sat in a single frame, then "no other frames exist" is the stripped-down philosophy you'd get.

Cheers,
-Anssi
 