Evaluate this paper on the derivation of the Born rule

In summary, the paper discusses the Curie-Weiss model of the quantum measurement process and how it can be used to derive the Born rule.
  • #316
I don't understand what you don't understand. Just about all of QM is about transitions and their probabilities -- including observation and the projection of a prepared eigenstate in one representation into a different representation. Why on Earth are you introducing "multifactorial noise"? It has nothing to do with the Born rule. Let's not get off-topic again, please.
 
  • #317
mikeyork said:
So we still need a hypothesis that small transitions in the total system are more likely, whether macroscopic or microscopic -- i.e. the Born rule. Then we can view this as the reason for a fairly stable (gradually evolving) universe.
I don't see what the Born rule has to do with the claim that 'small transitions in the total system are more likely ...'.

No doubt you've explained this so I'll read your posts.
 
  • #318
I think the discussion of the previous pages about the difference between probability and statistics (leaving aside the part about the connection with scalar products), or as PeterDonis put it, 'the idea of a "fundamental concept of probability" independent of statistics', is an instance of what Loschmidt's irreversibility paradox deals with: the confrontation between statistical reversibility (counting frequencies after the fact) and the predictive probabilistic introduction of irreversibility concealed in the H-theorem. Just by assuming any one probabilistic model as opposed to others, an element of irreversibility is conceptually inserted. But the unitarity of the quantum formalism is simply not capable of making this distinction; it assumes reversibility.

The particular way in which the Born rule model of probability happens to give the right answers on top of this is not clear so far, and I'm not convinced that the suggested connection with scalar products clarifies much.
 
  • #319
Perhaps in so far as once the "for all intents and purposes" threshold of irreversibility is reached, the state exhibits a "turning point". Measured and unmeasured states become as one, so to speak, and the Born rule falls out.
 
  • #320
Time reversibility of the scalar product does not imply unconditional reversibility, because the scalar product represents a conditional probability: we are dealing with probabilities conditioned on prior circumstances. The breaking of an egg could just as easily be reversed if only we could get all the atoms together in their appropriate locations at the same time. At the QM level, a particle decay amplitude does not give the same probability as reconstituting the particle from its decay products unless one can actually get the decay products together. You cannot reverse n ##\rightarrow## p + e + ##\bar{\nu}## unless you can collide an electron and a proton: e + p ##\rightarrow## n + ##\nu##.
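To make the symmetry-versus-conditioning point concrete, here is a small numerical sketch. It uses a hypothetical 3-level toy system (not neutron decay itself), and the "preparation prior" numbers are purely illustrative assumptions: the transition amplitudes are time-reversal symmetric, but the experimentally relevant joint probabilities differ because the reverse process requires a far harder preparation.

```python
import numpy as np

# A random 3-level unitary evolution (hypothetical toy system)
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)  # QR decomposition yields a unitary factor

i, f = 0, 2
p_forward = abs(U[f, i]) ** 2           # P(end in f | prepared in i)
p_reverse = abs(U.conj().T[i, f]) ** 2  # P(end in i | prepared in f), inverse evolution

# The conditional probabilities themselves are time-reversal symmetric...
assert np.isclose(p_forward, p_reverse)

# ...but what an experiment sees is the joint probability, conditioned on
# actually preparing the input. Assembling all "decay products" coherently
# is vastly harder than preparing the initial state (illustrative numbers):
prior_i = 1.0    # easy to prepare the initial state
prior_f = 1e-6   # hard to bring the products together
print(p_forward * prior_i, p_reverse * prior_f)
```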
 
  • #321
Agreed. It is conditional on the basis, which depends on the measurement. If that measurement then results in an increase in entropy, the odds of reversibility decrease.
 
  • #322
Carefully reading this thread, after all is said and done, and as others have already said, @A. Neumaier's thermal interpretation doesn't seem to offer anything of physical content toward answering where probability arises from within quantum theory (QT), nor does it seem capable of resolving the infamous measurement problem. The splitting of QT into 'shut up and calculate' and an 'interpretation' is somewhat of a caricature of the actual situation. I will try to illustrate the actual issue as simply and as clearly as possible, so that practically anyone can follow this discussion.

QT in practice consists of two inconsistent mathematical procedures, namely
1) a completely deterministic mathematical theory describing unitary evolution of the wavefunction,
2) an informal probabilistic rule of thumb for evaluating outcomes of experiments.
Instead of 'shut up and calculate', the theoretical and mathematical aspect of QT that physicists learn comprises 1, while 2 is a more informal postulate for the experimental side of QT.
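The split between 1 and 2 can be made concrete in a few lines. This is only a toy sketch, assuming a two-level system with an illustrative Hamiltonian; procedure 1 is the deterministic unitary evolution, while procedure 2 (the Born rule) is bolted on afterwards to produce measurement statistics.

```python
import numpy as np

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])  # a simple Hermitian toy Hamiltonian

# (1) Deterministic unitary evolution: psi(t) = exp(-i H t / hbar) psi(0)
def evolve(psi0, t):
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs @ np.diag(np.exp(-1j * eigvals * t / hbar)) @ eigvecs.conj().T
    return U @ psi0

psi = evolve(np.array([1.0, 0.0], dtype=complex), t=0.7)

# (2) The Born rule, applied by hand on top of (1):
# outcome k occurs with probability |psi_k|^2
probs = np.abs(psi) ** 2
rng = np.random.default_rng(42)
outcomes = rng.choice(len(psi), size=10_000, p=probs)
print(probs, np.bincount(outcomes, minlength=2) / 10_000)
```

Note that nothing in `evolve` produces or even refers to the sampling step; the randomness enters only through the separately postulated rule in (2), which is exactly the inconsistency described above.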

Now it so happens that all physical theories prior to QT have the format of 1, namely deterministic differential equations describable by mathematics, without having (or having to worry about) anything like 2. Many interpret this as meaning that QT is purely a mathematical model, just as Newtonian mechanics can be viewed as a purely mathematical model within some mathematical theory, but this is plainly false in the case of QT.

The fact of the matter is that without something like 2, QT cannot be compared to experiment, while all other physical theories can. Without 2 it isn't clear at all that QT is anything more than some mathematical model with no clearly decidable physical content; i.e., it would not be verifiable by experiment and therefore could not be regarded as a scientific theory, regardless of the possibility that it actually was describing nature and we were merely too stupid to know how to test it.

The derivation the paper in the OP gives is, as others have also said, a circular argument, but with a twist: a novel axiomatic convention gained by redefining or generalizing probabilistic notions. This new mathematical convention is then specifically chosen in order to avoid having to admit the assumption of regular probabilistic notions, which are not contained in 1, since the probabilistic results are mathematically clearly not part of unitary evolution in the first place.

The problem, however, isn't merely that 1 and 2 are inconsistent in that they describe logically quite distinct things (the mathematics of 1 describes something deterministic while that of 2 describes something probabilistic). The situation is worse: neither is derivable from the other, because they address two different 'domains', so to speak, namely what occurs intratheoretically according to the mathematical model when things are left to themselves (1), and how to compare the predictions of the model to the outcomes of experiments (2).

Since these two procedures together can nevertheless be used to describe experiments, what is implied is that they have a common mathematical source from which they both spring, and that 1 and 2 are therefore merely disjoint, sectioned-off descriptions of some more general mathematical description which has eluded us for almost a century. Using 1 to derive 2 is therefore a non-starter. Those here who intuitively recognize and/or immediately accept this argument are psychologically probably more experimentally inclined, as opposed to the mathematically and theoretically inclined thinkers. What counts as good mathematical practice for a mathematician doing mathematics according to contemporary conventions, i.e. a derivation that chooses other axioms and ends up with indistinguishable consequences, will never pass as good mathematical practice for physics, given that the extent of the physical theory remains experimentally undetermined.

This thread would serve as a very interesting and revealing case study of the rampantly occurring different kinds of attitudes and points of view (NB: mathematician, mathematical physicist, theoretician, experimenter, probability theorist, statistician, not even to mention different points of view within mathematics...) interacting and competing here in the discussion and interpretation of an unsolved issue in physics among physicists.

Without a doubt these intersubjectivities also occur in the actual practice of physics among different practitioners and cause intense disagreement, leading to the creation of scientist factions upholding some particular point of view. It is clear that the subjectivities on display are idiosyncratic psychological consequences based in large part on particular ways of looking at things, coherent with each person's specific form of academic training, including group-based and personally developed biases about what is (more) fundamental.

For any outsider, especially an informed layman, the intersubjectivities displayed here would probably be very surprising, given the 'objectivity' everyone expects from physics and certainly the usually objective nature of mathematics. But that is a red herring: although physics and mathematics may be objective, physicists and mathematicians certainly are not. Scientific arguments are only true or objective relative to some theory or preliminary theory; a theory need not be true for an argument based upon it to be objective, and whether that theory is valid is a matter to be decided by carefully analyzed experiment.

All along the actual historical development of mathematics and physics, intuitions, not always readily explainable, were constantly employed, and they are only objective relative to some specific mathematical theory, given that it exists in the first place; to a rival mathematical theory, what else can these intuitions be but subjective? To deny this aspect of mathematics is not merely to fail to be objective, but to be subjective and wholly blind to being so. In the practice of mathematics, respected intuitions are called conjectures; in physics, they are called hypotheses.

In stark contrast with what a scientist may think or claim, the choice between scientific arguments is almost completely subjective prior to experiment. The clear subjectivity of many scientific arguments does not occur merely in empirically based sciences like psychology and sociology; it also runs rampant in quantitative sciences like physics and economics, only here the subjectivity is hidden within the mathematics. Moreover, the situation seems far worse in the purely rational field of mathematics, precisely because there is no such thing as a measurement capable of constraining thought; there is only deductive proof. It should be obvious that no one, not even the mathematician, proceeds by deductive reasoning alone, and therefore large parts of mathematics do not rest on deductive certainty, or indeed any certainty at all.

tl;dr: just listen to Feynman:
I) Physics is not mathematics and mathematics is not physics; one helps the other
II) Each piece, or part, of the whole of nature is always merely an approximation to the complete truth, or the complete truth so far as we know it. In fact, everything we know is only some kind of approximation, because we know that we do not know all the laws as yet. Therefore, things must be learned only to be unlearned again or, more likely, to be corrected. The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth”.
III) The first principle is that you must not fool yourself – and you are the easiest person to fool
IV) A very great deal more truth can become known than can be proven.
V) Directed towards those who are terrible at explaining things clearly or simply refuse to do so:
Feynman said:
The real problem in speech is not precise language. The problem is clear language. The desire is to have the idea clearly communicated to the other person. It is only necessary to be precise when there is some doubt as to the meaning of a phrase, and then the precision should be put in the place where the doubt exists. It is really quite impossible to say anything with absolute precision, unless that thing is so abstracted from the real world as to not represent any real thing.

Pure mathematics is just such an abstraction from the real world, and pure mathematics does have a special precise language for dealing with its own special and technical subjects. But this precise language is not precise in any sense if you deal with real objects of the world, and it is only pedantic and quite confusing to use it unless there are some special subtleties which have to be carefully distinguished.
 
  • #323
Auto-Didact said:
QT in practice consists of two inconsistent mathematical procedures, namely
1) a completely deterministic mathematical theory describing unitary evolution of the wavefunction,
2) an informal probabilistic rule of thumb for evaluating outcomes of experiments.
Instead of 'shut up and calculate', the theoretical and mathematical aspect of QT that physicists learn comprises 1, while 2 is a more informal postulate for the experimental side of QT.

Hmmm. Do you think Andrew Gleason would agree with that?

Thanks
Bill
 
  • #324
bhobba said:
Hmmm. Do you think Andrew Gleason would agree with that?

Thanks
Bill
What Gleason personally would agree with I have no idea, but Gleason's theorem is a very subtle and interesting red herring in this matter. The theorem only shows that any function which assigns probabilities to measurement outcomes related to 1 must take the form of Born's rule.

The key point is that such a function is not at all part of 1 itself; this is clear from the fact that, in proving the theorem, the function is assumed by fiat to exist at the outset of the argument. That is, the proof sneaks in an extraneous function by assuming its existence, and then demonstrates that any such assumed extraneous function will always have the form of Born's rule.

This theorem definitely doesn't count as an intratheoretical part of 1, i.e. it isn't part of the theoretical description of unitary evolution in any of its various mathematically equivalent forms. The theorem is instead an atheoretical justification from the context of mathematics itself for the assigning of things outside of 1, namely probabilities, to things inside 1 in the context of experiments; in other words, the theorem is a mathematics based atheoretical justification of 2.

The mathematical consistency of the existence of such an extraneous function in the empirical context of 1 only shows that something like 2 is possible; it does not in any way imply or guarantee that the intrinsic intratheoretical properties of 1 (determinism i.e. differential equations) and 2 (probability) are mathematically consistent with each other.
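The property Gleason assumes, a nonnegative assignment to projectors that is additive over orthogonal decompositions (a "frame function"), can at least be checked numerically for the Born assignment ##p(P) = \mathrm{Tr}(\rho P)##. This sketch uses an arbitrary random state and basis; it only verifies consistency of the assignment, and is of course in no way a proof of the theorem:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random density matrix on C^3 (arbitrary state, purely for illustration)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = M @ M.conj().T          # positive semidefinite
rho /= np.trace(rho).real     # normalize to unit trace

# A random orthonormal basis -> three orthogonal rank-1 projectors
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
projectors = [np.outer(Q[:, k], Q[:, k].conj()) for k in range(3)]

# The Born assignment p(P) = Tr(rho P): each value is nonnegative, and the
# values are additive over the orthogonal decomposition (they sum to 1),
# i.e. exactly the kind of frame function whose existence Gleason assumes.
probs = [np.trace(rho @ P).real for P in projectors]
print(probs, sum(probs))
```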
 
  • #326
jeffery_winkler said:
The following paper claims to derive Born's rule.

https://arxiv.org/pdf/1801.06347.pdf
I only gave this a quick read-through, but assuming their logic is valid, this is the first explanation I have ever seen that gives an actual physical reason, one which supports both Wheeler's intuition on the matter and seems to give a positive answer to Bohr's conjecture. This sounds almost too good to be true. Moreover, if true, it implies that string theory is either false or at best mathematically equivalent to QFT.
 
  • #327
Auto-Didact said:
The derivation the paper in the OP gives is as others have also said a circular argument
Where is the circularity? They carefully avoid it by distinguishing between q-notions (arising in the mathematical theory with statistical names) and true statistical notions.
 
