A Evaluate this paper on the derivation of the Born rule

  • #301
mikeyork said:
I don't care if you don't like my argument.

Is it just your argument? (If it is, it's off topic here--you should be publishing it as a paper.) Or does it appear in, e.g., some standard reference on QM? If so, what reference?
 
  • #302
PeterDonis said:
Is it just your argument? (If it is, it's off topic here--you should be publishing it as a paper.) Or does it appear in, e.g., some standard reference on QM? If so, what reference?
It's not just my argument. It's a trivially simple logical observation about the nature of the Born rule -- what this thread was originally about until you and Mentz114 derailed it.
 
  • #303
A. Neumaier said:
No. Failure of Born's rule is completely unrelated to failure of quantum mechanics. The latter is applied in a much more flexible way than the Born rule demands. It seems that we'll never agree on this.
No, we'll never agree to this, because to use QT in "a much more flexible way" (whatever you mean by this), you need Born's rule to derive it.

For example, in some posting above you complained about the inapplicability of Born's rule to the case where the resolution of the measurement apparatus is not accurate enough to resolve discrete values of some observable (e.g., spin). This objection, however, does not hold. In this case, of course, you need more than Born's rule: you need Born's rule to calculate probabilities for precisely measuring the observable, and then, on top of that, a description of the "detector acceptance and resolution". Usually that's empirically determined using "calibrated probes". Nevertheless, the fundamental connection between the QT formalism and what's observed in experiments is still Born's rule. Of course, the cases where you can apply Born's rule in its fundamental form are rare, because it's usually difficult to build very precise measurement devices, but this doesn't invalidate Born's rule as a fundamental part of the (minimal) interpretation of QT that makes it applicable to real-world experiments.

Also the often-cited POVM formalism, which generalizes Born's rule to more general "inaccurate measurements", is itself based on Born's rule.
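To make this concrete, here is a minimal numerical sketch of that point (a toy 3-level system with a made-up Gaussian detector response; my own illustration, not taken from any textbook): the ideal Born-rule probabilities get folded with the detector resolution, and the resulting "inaccurate measurement" is exactly a POVM whose outcome probabilities are still of the Born form ##\mathrm{Tr}(\hat{\rho}\hat{E}_k)##.

```python
import numpy as np

# Toy observable with eigenvalues a = -1, 0, +1 and eigenprojectors P_a (3x3).
eigvals = np.array([-1.0, 0.0, 1.0])
projectors = [np.diag([1.0, 0, 0]), np.diag([0, 1.0, 0]), np.diag([0, 0, 1.0])]

# Some density matrix rho (a pure state here, but any valid rho works).
psi = np.array([0.6, 0.0, 0.8], dtype=complex)
rho = np.outer(psi, psi.conj())

# Ideal (projective) Born-rule probabilities: P(a) = Tr(rho P_a).
p_ideal = np.array([np.trace(rho @ P).real for P in projectors])

# Finite detector resolution: reading bin k registers eigenvalue a with a
# Gaussian response R[k, a]; each column of R sums to 1 (every event lands somewhere).
bins = np.linspace(-2, 2, 9)
sigma = 0.5
R = np.exp(-0.5 * ((bins[:, None] - eigvals[None, :]) / sigma) ** 2)
R /= R.sum(axis=0)

# The smeared measurement is a POVM: E_k = sum_a R[k, a] P_a, with sum_k E_k = 1.
povm = [sum(R[k, i] * projectors[i] for i in range(3)) for k in range(len(bins))]
assert np.allclose(sum(povm), np.eye(3))

# Probabilities of the detector readings, still of the Born form Tr(rho E_k),
# equal the ideal Born probabilities folded with the detector response.
p_detected = np.array([np.trace(rho @ E).real for E in povm])
assert np.allclose(p_detected, R @ p_ideal)
print(p_ideal, p_detected.sum())   # p_detected still sums to 1
```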
 
  • Like
Likes Auto-Didact
  • #304
PeterDonis said:
Which says:
In other words, the "fundamental concept" appears to be relative frequency--i.e., statistics. So I still don't understand your statement that probability is a "fundamental concept" while statistics is "derived".
Indeed. There is also a debate about the general meaning of probabilities in application to empirical facts (statistics), independent of QT. Some people seem to deny the meaning of probabilities as "frequencies of occurrence" when a random experiment is repeated on an ensemble of equally prepared setups of this experiment. Nobody, particularly not the QBists (proponents of another modern "interpretation" of QT), could ever convincingly explain to me how I should be able to empirically check a hypothesis (i.e., assumed probabilities or probability distributions associated with a random experiment) if not by using the usual "frequentist interpretation" of probabilities. It is also clear that probability theory does not tell you which probability distribution might be a successful description; you need to "guess" the probabilities for the outcomes of random experiments somehow and then verify or falsify them by observation. On the other hand, the frequentist interpretation has a foundation within probability theory itself, in terms of theorems like the "Law of Large Numbers". This is a convincing argument for this interpretation, and it is what makes probability theory applicable to concrete real-world problems, by providing the foundation for the empirical investigation of assumed probabilities/probability distributions.
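As a trivial numerical illustration of this checking procedure (made-up probabilities, nothing physical): hypothesize a distribution for a three-outcome random experiment, repeat the experiment many times, and compare the relative frequencies with the hypothesis; the law of large numbers guarantees the deviation shrinks (typically like ##1/\sqrt{n}##).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothesized probabilities for a three-outcome random experiment.
p_hyp = np.array([0.5, 0.3, 0.2])

for n in (10**2, 10**4, 10**6):
    outcomes = rng.choice(3, size=n, p=p_hyp)        # n independent repetitions
    freq = np.bincount(outcomes, minlength=3) / n    # relative frequencies
    print(n, freq, np.max(np.abs(freq - p_hyp)))     # deviation shrinks with n
```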

In extension to pure probability theory (as formalized, e.g., by the Kolmogorov axioms) there are also ideas about how to "guess" probabilities. One is the maximum-entropy method, which defines a measure of the missing information (classically the Shannon entropy) that has to be maximized under the constraints of the given information about the system one aims to describe by a probability function or distribution. Of course, it doesn't tell you which information you should have in order to get a good guess for these probabilities in a given real-world situation.
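A minimal numerical sketch of that maximum-entropy "guess" (the classic loaded-die example, with an assumed mean of 4.5 as the only given information; all numbers are made up): maximizing the Shannon entropy under the constraints reproduces the familiar exponential (Gibbs-like) form.

```python
import numpy as np
from scipy.optimize import minimize

values = np.arange(1, 7)     # outcomes of a die
target_mean = 4.5            # the only information we assume to have

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))   # minimizing -H means maximizing the entropy H

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},                  # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, values) - target_mean},  # mean constraint
)
res = minimize(neg_entropy, x0=np.full(6, 1 / 6), bounds=[(0, 1)] * 6,
               constraints=constraints)
p_maxent = res.x
print(p_maxent)   # exponential in the value, p_i ~ exp(lambda * i), as expected
```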
 
  • Like
Likes Mentz114
  • #305
vanhees71 said:
It is also clear that probability theory does not tell you which probability distribution might be a successful description; you need to "guess" the probabilities for the outcomes of random experiments somehow and then verify or falsify them by observation.
Isn't that how all science works?

A probability theory takes a physical idea (e.g. kinematics of particle collisions) and adds in a random principle and deduces a distribution for some variable (e.g. a normal distribution). Adding in a random principle is just like any other hypothesis in a scientific theory.

The whole point of such a theory, just like any other theory, is to predict, not to count. And its applications extend far beyond just predicting frequencies. For example, financial derivative pricing theory is critically based on the theory of random processes.
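For instance (a minimal sketch with made-up numbers): take the physical picture of a particle receiving many small independent kicks, add the "random principle" that the kicks are i.i.d., and the theory predicts an (approximately) normal distribution for the net displacement, which can then be compared with counted frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Physical idea + random principle: net displacement = sum of many small i.i.d. kicks.
n_kicks, n_particles = 400, 10_000
kicks = rng.uniform(-1, 1, size=(n_particles, n_kicks))
displacement = kicks.sum(axis=1)

# Predicted (not counted) spread: normal with mean 0 and variance n * Var(kick).
sigma_pred = np.sqrt(n_kicks / 3.0)      # Var of Uniform(-1, 1) is 1/3
print(displacement.std(), sigma_pred)    # the counted spread matches the prediction
```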

vanhees71 said:
On the other hand, the frequentist interpretation has a foundation within probability theory itself, in terms of theorems like the "Law of Large Numbers". This is a convincing argument for this interpretation, and it is what makes probability theory applicable to concrete real-world problems, by providing the foundation for the empirical investigation of assumed probabilities/probability distributions.

In extension to pure probability theory (as formalized, e.g., by the Kolmogorov axioms) there are also ideas about how to "guess" probabilities. One is the maximum-entropy method, which defines a measure of the missing information (classically the Shannon entropy) that has to be maximized under the constraints of the given information about the system one aims to describe by a probability function or distribution. Of course, it doesn't tell you which information you should have in order to get a good guess for these probabilities in a given real-world situation.
But, of particular relevance to the Born rule, one can encode probabilities in many ways other than directly hypothesizing a distribution. One simply builds a theory of some quantity f(x) and then expresses P(x) as a unique function of f(x). The Born rule says to do that in a specific way via the scalar product. And, as I have tried to explain, this is quite profound because, given the usual Hilbert space picture, a moderately stable universe in which small transitions are more likely than large transitions will suggest (but no, it doesn't prove) that P(x) be a monotonically increasing function of the magnitude of the scalar product.
 
Last edited:
  • Like
Likes Auto-Didact
  • #306
mikeyork said:
It's a trivially simple logical observation about the nature of the Born rule

Ok, so you're saying that this...

mikeyork said:
one can encode probabilities in many ways other than directly hypothesizing a distribution. One simply builds a theory of some quantity f(x) and then expresses P(x) as a unique function of f(x).

...is a "trivially simple logical observation", and so if I look in any textbook on probability theory, I will see it referred to? And then the addition of this...

mikeyork said:
The Born rule says to do that in a specific way via the scalar product.

...is a "trivially simple logical observation" that I will see in any textbook on QM?

mikeyork said:
what this thread was originally about until you and Mentz114 derailed it.

IIRC you were the one who brought up the idea of a "fundamental concept of probability" independent of statistics. That seems to me to be a thread derail, since Born's rule only claims to relate squared moduli of amplitudes to statistics of ensembles of observations.
 
  • #307
mikeyork said:
But, of particular relevance to the Born rule, one can encode probabilities in many ways other than directly hypothesizing a distribution. One simply builds a theory of some quantity f(x) and then expresses P(x) as a unique function of f(x). The Born rule says to do that in a specific way via the scalar product. And, as I have tried to explain, this is quite profound because, given the usual Hilbert space picture, a moderately stable universe in which small transitions are more likely than large transitions will suggest (but no, it doesn't prove) that P(x) be a monotonically increasing function of the magnitude of the scalar product.
I don't understand what you are after with this. Could you give a simple physics example? Formally, in QT it's clear:

If you have a state, represented by the statistical operator ##\hat{\rho}##, then you can evaluate the probability (distribution) to find a certain value of any observable you want. Measuring an arbitrary observable ##A## on the system, represented by the self-adjoint operator ##\hat{A}##, which has orthonormalized (generalized) eigenvectors ##|a,\lambda \rangle## (where ##\lambda## is some variable or a finite set of variables labelling the eigenstates of ##\hat{A}## to eigenvalue ##a##), the probability to find the value ##a## is
$$P(a)=\sum_{\lambda} \langle a,\lambda|\hat{\rho}|a,\lambda \rangle.$$
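A minimal numerical rendering of this formula (a random 4x4 ##\hat{\rho}## and a doubly degenerate eigenvalue, all made up): the sum over ##\lambda## is just the trace of ##\hat{\rho}## against the projector onto the eigenspace of ##a##.

```python
import numpy as np

rng = np.random.default_rng(2)

# A random 4x4 density matrix rho (positive, unit trace).
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Observable A with a doubly degenerate eigenvalue a: eigenvectors |a,1>, |a,2>.
a_vecs = [np.array([1, 0, 0, 0], dtype=complex),
          np.array([0, 1, 0, 0], dtype=complex)]

# P(a) = sum_lambda <a,lambda| rho |a,lambda>
P_a = sum((v.conj() @ rho @ v).real for v in a_vecs)

# Equivalently Tr(rho Pi_a), with Pi_a the projector onto the a-eigenspace.
Pi_a = sum(np.outer(v, v.conj()) for v in a_vecs)
assert np.isclose(P_a, np.trace(rho @ Pi_a).real)
print(P_a)
```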
 
  • #308
PeterDonis said:
...is a "trivially simple logical observation", and so if I look in any textbook on probability theory, I will see it referred to?
Any textbook that discusses a lognormal distribution gives you an explicit example: ##f(x) = \log x##, ##P(x) = G(f(x))## where ##G## is a Gaussian. Almost any book on stochastic processes will explain why the Ito arithmetic Brownian process for ##\log x##, with the solution ##P(x) = G(\log x)##, is more natural (as well as simpler to understand) than trying to express the geometric Brownian process for ##x## directly. (It's because it is scale-independent.)
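A minimal simulation of that point (made-up drift and volatility): drive ##\log x## with an Ito arithmetic Brownian motion and the distribution of ##\log x## at any later time is Gaussian, so ##x## itself is lognormal; the scale-independent variable ##f(x)=\log x## is the natural one to work with.

```python
import numpy as np

rng = np.random.default_rng(3)

# Arithmetic Brownian motion for f = log x:  df = mu*dt + sigma*dW  (Ito).
mu, sigma, T, n_steps, n_paths = 0.05, 0.2, 1.0, 250, 20_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
log_x = np.log(1.0) + np.cumsum(mu * dt + sigma * dW, axis=1)   # start at x0 = 1

# At time T, f = log x is Gaussian with mean mu*T and variance sigma^2*T,
# so x itself is lognormally distributed: P(x) involves G(log x).
f_T = log_x[:, -1]
print(f_T.mean(), mu * T)              # ~ 0.05
print(f_T.std(), sigma * np.sqrt(T))   # ~ 0.2
```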

PeterDonis said:
...is a "trivially simple logical observation" that I will see in any textbook on QM?
Mostly yes, though some may express it differently: ##f(a) = |\langle a|\psi\rangle|##, ##P(a|\psi) = f(a)^2##. That is Born's rule.
PeterDonis said:
IIRC you were the one who brought up the idea of a "fundamental concept of probability" independent of statistics.
Always in the context of Born's rule until others, such as yourself, interjected with your primitive view of probability.
 
  • Like
Likes Auto-Didact
  • #309
vanhees71 said:
I don't understand what you are after with this. Could you give a simple physics example? Formally, in QT it's clear:

If you have a state, represented by the statistical operator ##\hat{\rho}##, then you can evaluate the probability (distribution) to find a certain value of any observable you want. Measuring an arbitrary observable ##A## on the system, represented by the self-adjoint operator ##\hat{A}##, which has orthonormalized (generalized) eigenvectors ##|a,\lambda \rangle## (where ##\lambda## is some variable or a finite set of variables labelling the eigenstates of ##\hat{A}## to eigenvalue ##a##), the probability to find the value ##a## is
$$P(a)=\sum_{\lambda} \langle a,\lambda|\hat{\rho}|a,\lambda \rangle.$$
Prepare a state ##\psi##. Project into the representation ##A## with eigenvalues ##a_i##:

##|\psi\rangle = \sum_i |a_i\rangle\langle a_i|\psi\rangle##

Born's rule tells you that if you try to measure ##A##, then ##P(a_i|\psi) = |\langle a_i|\psi\rangle|^2##.
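A quick numerical check of those two lines (a random state and a random orthonormal basis, nothing physical): the ##|\langle a_i|\psi\rangle|^2## are non-negative and sum to 1, and the expansion reconstructs ##|\psi\rangle##.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 5

# A prepared normalized state |psi>.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# A random orthonormal eigenbasis {|a_i>} of some observable A (columns of Q).
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

# Expansion coefficients <a_i|psi> and Born-rule probabilities.
amps = Q.conj().T @ psi
probs = np.abs(amps) ** 2
assert np.isclose(probs.sum(), 1.0)   # probabilities over the a_i sum to 1

# The expansion |psi> = sum_i |a_i><a_i|psi> reconstructs the state.
assert np.allclose(Q @ amps, psi)
print(probs)
```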

I really have no idea why anyone should have such difficulty with this.

As regards the relevance of small transitions vs. big transitions: first consider the analogy of Cartesian vectors. Two unit vectors that are close to each other will have a large scalar product compared to two vectors that are nearly orthogonal. Might two state vectors that are "near" each other in the same sense of a larger scalar product represent states that are more similar than states that have less overlap in Hilbert space? Now imagine you prepare a state within a narrowly-defined momentum band, measure position as lightly as possible, then measure momentum as lightly as possible: would you expect the measured momentum to be nearer its original band or farther away?
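To put a number on that intuition (a toy one-dimensional sketch with a made-up packet width): the overlap of two Gaussian momentum-space packets falls off monotonically as their centres separate, so "small" momentum transitions correspond to large scalar products.

```python
import numpy as np

# Overlap of two normalized Gaussian momentum-space packets of width dp,
# centred at p0 and p1: the scalar product shrinks as the packets separate.
grid = np.linspace(-20.0, 20.0, 4001)
dx = grid[1] - grid[0]

def packet(p_centre, dp=1.0):
    phi = np.exp(-(grid - p_centre) ** 2 / (4 * dp**2))
    return phi / np.sqrt(np.sum(np.abs(phi) ** 2) * dx)

def overlap(p0, p1):
    return np.sum(packet(p0) * packet(p1)) * dx   # <phi_1|phi_0> for real packets

for shift in (0.0, 0.5, 1.0, 2.0, 4.0):
    # "closer" final states have larger overlap, hence larger Born probability
    print(shift, overlap(0.0, shift))
```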
 
Last edited:
  • Like
Likes Auto-Didact
  • #310
That's identical to what I wrote for the special case of a pure state, where ##|\psi \rangle## is a representative of the ray defining this state. The statistical operator in this case is ##\hat{\rho}=|\psi \rangle \langle \psi|##.
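Explicitly, inserting ##\hat{\rho}=|\psi \rangle \langle \psi|## into the formula of #307 gives
$$P(a)=\sum_{\lambda} \langle a,\lambda|\psi \rangle\langle \psi|a,\lambda \rangle = \sum_{\lambda} |\langle a,\lambda|\psi \rangle|^2,$$
which for a non-degenerate eigenvalue reduces to ##P(a)=|\langle a|\psi \rangle|^2##, i.e. exactly the expression in #309.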

I still don't understand what you want to say (also in regard to #308).
 
  • #311
vanhees71 said:
I still don't understand what you want to say (also in regard to #308).
Mathematically, yes, which is why I don't understand why you have had such difficulty with my posts. However, my interpretation is different.

There are two related things that I have tried to bring to this thread in order to make a simple remark about the Born rule being a very natural interpretation of the scalar product. I'll make one last attempt.

1. Historically we tend to be fixated on equating "probability" with a distribution function, identified with asymptotic relative frequency. But if we think of "probability" as a more amorphous idea which is not necessarily a distribution function but something which enables us to calculate a unique distribution function, then any mathematical encoding that does this would do in principle. In particular, an encoding ##f(a)## for which ##P(a) = G(f(a))##, where ##G(f)## is monotonically increasing in ##f##, gives ##f(a)## an appropriate significance. In QM, the Born rule suggests that the scalar product, ##f(a) = |\langle a|\psi\rangle|##, is one such encoding. There is nothing in this idea except a simple revision of the concept of probability that distinguishes it from, yet enables us to calculate, a distribution function; the scalar product is the fundamental underlying idea. If you don't like this and want to stick with probability as meaning a distribution function, then fine. I'm just pointing out that probability can be a much more general idea, and in QM the scalar product serves this purpose well.

2. The relative stability of the universe -- change is gradual rather than catastrophic -- gives a natural significance to the scalar product in QM as serving as a probability encoding. You just have to interpret this gradual change as meaning that transitions between states that are "close" to each other in the sense of a large scalar product are more likely than transitions between states that are less "close". This clearly suggests that ##P(a|\psi)## should be a monotonically increasing function of ##|\langle a|\psi\rangle|##.

And this is why I say that ##|\langle a|\psi\rangle|## offers a "natural" expression of "probability" in QM. I am not saying it is a proof; just that it is a very reasonable and attractive idea. I also think it suggests that ##|\langle a|\psi\rangle|## offers a deeper, more fundamental, idea of "probability" than a simple distribution function. But this is a secondary (and primarily semantic) issue that you can ignore if you wish.
 
  • Like
Likes Auto-Didact
  • #312
The quote in my last post #311 should have read
vanhees71 said:
That's identical to what I wrote for the special case of a pure state, where ##|\psi \rangle## is a representative of the ray defining this state.
for my remark "Mathematically, yes,..." to make sense. Sorry about that, I don't know how it got screwed up.
 
  • #313
Mike, another thought, along similar lines. The scalar product is a function of the amplitudes of the initial and final states. The final state is an eigenstate of the measuring equipment. If the measuring equipment is sufficiently macroscopic the final state will be pretty close to the initial state in many scenarios. The Born rule arises then as a neat approximation.
 
  • #314
Jilang said:
Mike, another thought, along similar lines. The scalar product is a function of the amplitudes of the initial and final states. The final state is an eigenstate of the measuring equipment. If the measuring equipment is sufficiently macroscopic the final state will be pretty close to the initial state in many scenarios. The Born rule arises then as a neat approximation.
I would say that the final state of the apparatus will be close to its initial state, but such a small change in the apparatus could be compensated by what is a relatively significant change in the observed microscopic system. (E.g. the energy/momentum exchanged between apparatus and, say, a particle, might be small compared to the apparatus, but large compared to the particle.)

Also remember that in a scattering experiment, for instance, or in particle decay, the apparatus does not take part in the actual transition; it merely prepares the collision and detects the individual resultant particles after they leave the collision location. So we still need a hypothesis that small transitions in the total system are more likely, whether macroscopic or microscopic -- i.e. the Born rule. Then we can view this as the reason for a fairly stable (gradually evolving) universe.
 
  • #315
mikeyork said:
So we still need a hypothesis that small transitions in the total system are more likely, whether macroscopic or microscopic -- i.e. the Born rule. Then we can view this as the reason for a fairly stable (gradually evolving) universe.
I don't understand. What transitions? If these transitions are multifactorial noise, then all you are saying is that their spectrum tends to a Gaussian of mean zero -- viz. small changes are more likely than large ones.
 
  • #316
I don't understand what you don't understand. Just about all of QM is about transitions and their probabilities -- including observation and the projection of a prepared eigenstate in one representation into a different representation. Why on Earth are you introducing "multifactorial noise"? It has nothing to do with the Born rule. Let's not get off-topic again, please.
 
  • Like
Likes Auto-Didact
  • #317
mikeyork said:
So we still need a hypothesis that small transitions in the total system are more likely, whether macroscopic or microscopic -- i.e. the Born rule. Then we can view this as the reason for a fairly stable (gradually evolving) universe.
I don't see why the Born rule has anything to do with 'small transitions in the total system are more likely ...'.

No doubt you've explained this so I'll read your posts.
 
  • #318
I think the discussion about the difference between probability and statistics (leaving aside the part about the connection with scalar products) of the previous pages (or, as PeterDonis put it, 'the idea of a "fundamental concept of probability" independent of statistics') is an instance of what Loschmidt's (irreversibility) paradox deals with. The confrontation is between statistical reversibility (counting frequencies after the fact) and the predictive, probabilistic introduction of irreversibility concealed in the H-theorem: just by assuming any particular probabilistic model as opposed to others, an element of irreversibility is conceptually inserted. But the unitarity of the quantum formalism is simply not capable of making this distinction; it just assumes reversibility.

The particular way in which the Born rule model of probability happens to give the right answers on top of this is not clear so far, and I'm not convinced that the suggested connection with scalar products clarifies much.
 
  • #319
Perhaps insofar as, once the "for all intents and purposes" threshold of irreversibility is reached, the state exhibits a "turning point". Measured and unmeasured states become as one, so to speak, and the Born rule falls out.
 
  • #320
Time reversibility of the scalar product does not imply unconditional reversibility, because the scalar product represents a conditional probability. The distinction is due to the fact that we are dealing with probabilities conditioned on prior circumstances. The breaking of an egg could be just as easily reversed if we could only get all the atoms together in their appropriate locations at the same time. At the QM level, a particle decay amplitude does not give the same probability as reconstituting the particle from its decay products, unless one can actually get the decay products together. You cannot reverse n ##\rightarrow## p + e + ##\bar{\nu}## unless you can collide an electron and a proton: e + p ##\rightarrow## n + ##\nu##.
 
  • Like
Likes Auto-Didact
  • #321
Agreed. It is conditional on the basis, which is dependent on the measurement. If said measurement then results in an increase in entropy the odds of reversibility reduce.
 
  • #322
Carefully reading this thread, after all is said and done, and as others have already said, @A. Neumaier's thermal interpretation doesn't seem to offer anything of physical content on the issue of where probability arises from within quantum theory (QT), nor does it seem capable of resolving the infamous measurement problem. The splitting of QT into 'shut up and calculate' and an 'interpretation' is somewhat of a caricature of the actual situation. I will try to illustrate the actual issue as simply and clearly as possible, so that practically anyone can follow this discussion.

QT in practice consists of two inconsistent mathematical procedures, namely
1) a completely deterministic mathematical theory describing unitary evolution of the wavefunction,
2) an informal probabilistic rule of thumb for evaluating outcomes of experiments.
Instead of 'shut up and calculate', the theoretical and mathematical aspect of QT that physicists learn comprises 1, while 2 is a more informal postulate for the experimental side of QT.

Now it so happens that all physical theories prior to QT have the format of 1, namely deterministic differential equations describable by mathematics, without having, or having to worry about, anything like 2. Many interpret this as QT being purely a mathematical model, just as, for example, Newtonian mechanics can be viewed as a purely mathematical model in some mathematical theory, but this is plainly false in the case of QT.

The fact of the matter is that without something like 2, QT cannot be compared to experiment, while all other physical theories can be; without 2 it isn't clear at all that QT is anything more than merely some mathematical model with no clearly decidable physical content, i.e. it would not be verifiable by experiment and therefore could not be regarded as a scientific theory, regardless of the possibility that it actually was describing nature and we were merely too stupid to know how to test it.

The derivation the paper in the OP gives is, as others have also said, a circular argument, but with a twist: a novel axiomatic convention obtained by redefining or generalizing probabilistic notions. This new mathematical convention is then specifically chosen in order to avoid having to admit the assumption of the regular probabilistic notions, which are not contained in 1, since the probabilistic results are clearly not mathematically part of unitary evolution in the first place.

The problem, however, isn't merely that 1 and 2 are inconsistent in that they describe logically quite distinct things, the mathematics of 1 describing something deterministic while that of 2 describes something probabilistic. The situation is worse: one is not derivable from the other, because they actually address two different 'domains', so to speak, namely what occurs intratheoretically according to the mathematical model when things are left to themselves (1) and how to compare the predictions of the model to the outcomes of experiments (2).

Since these two procedures together can nevertheless be used to describe experiments, what is implied is that they have a common mathematical source from which they both spring, and that 1 and 2 are therefore merely disjoint, sectioned-off descriptions of some more general mathematical description which has eluded us for almost a century. Using 1 to derive 2 is therefore a non-starter; those here who intuitively recognize and/or immediately accept this argument are psychologically probably more experimentally inclined, as opposed to the mathematically and theoretically inclined thinkers. What is good mathematical practice for a mathematician doing mathematics according to contemporary conventions, i.e. a mathematical derivation obtained by choosing other axioms and ending up with indistinguishable consequences, will never pass as good practice for physics, given that the extent of physical theory remains experimentally undetermined.

This thread would serve as a very interesting and revealing case study of the different kinds of attitudes and points of view (NB: mathematician, mathematical physicist, theoretician, experimenter, probability theorist, statistician, not even to mention different points of view within mathematics...) occurring, interacting and competing here in the discussion and interpretation of an unsolved issue in physics among physicists.

Without a doubt these intersubjectivities also occur in the actual practice of physics among different practitioners and cause intense disagreement. This leads to the creation of scientist factions upholding some particular point of view. It is clear that the subjectivities on display are idiosyncratic psychological consequences based in large part on particular ways of looking at things coherent with the person's specific form of academic training, including group-based and personally developed biases about what is (more) fundamental.

For any outsider, especially an informed layman, the intersubjectivities displayed here would probably be very surprising, given the 'objectivity' everyone expects from physics and certainly the usually objective nature of mathematics, but that is a red herring: although physics and mathematics may be objective, physicists and mathematicians certainly are not. Scientific arguments are only true or objective relative to some theory or some preliminary theory; a theory need not be true for an argument based upon it to be objective, and whether that theory is valid or not is a matter to be decided by a carefully analyzed experiment.

Throughout the actual historical development of mathematics and physics, intuitions, which haven't always been readily explainable, were constantly made, and they are only objective relative to some specific mathematical theory, given that it exists in the first place; to a rival mathematical theory, what else can these intuitions be but subjective? To deny this aspect of mathematics is not merely to fail to be objective, but to be subjective and wholly blind to being so. In the practice of mathematics, respected intuitions are called conjectures, while in physics they are called hypotheses.

In stark contrast with what a scientist may think or claim, the choice between scientific arguments is almost a completely subjective one prior to experiment. The clear subjectivity of many scientific arguments does not merely occur in empirically based sciences like psychology and sociology; it also runs rampant in quantitative sciences like physics and economics, only here the subjectivity is hidden within the mathematics. Moreover, it seems the situation is far worse in the purely rational field of mathematics, precisely because there is no such thing as measurement capable of constraining thoughts; instead there is only deductive proof. It should be obvious that no one, not even the mathematician, proceeds by deductive reasoning alone, and therefore large parts of mathematics do not rest on deductive certainty or indeed any certainty at all.

tl;dr: just listen to Feynman:
I) Physics is not mathematics and mathematics is not physics; one helps the other
II) Each piece, or part, of the whole of nature is always merely an approximation to the complete truth, or the complete truth so far as we know it. In fact, everything we know is only some kind of approximation, because we know that we do not know all the laws as yet. Therefore, things must be learned only to be unlearned again or, more likely, to be corrected. The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth”.
III) The first principle is that you must not fool yourself – and you are the easiest person to fool
IV) A very great deal more truth can become known than can be proven.
V) Directed towards those who are terrible at explaining things clearly or simply refuse to do so:
Feynman said:
The real problem in speech is not precise language. The problem is clear language. The desire is to have the idea clearly communicated to the other person. It is only necessary to be precise when there is some doubt as to the meaning of a phrase, and then the precision should be put in the place where the doubt exists. It is really quite impossible to say anything with absolute precision, unless that thing is so abstracted from the real world as to not represent any real thing.

Pure mathematics is just such an abstraction from the real world, and pure mathematics does have a special precise language for dealing with its own special and technical subjects. But this precise language is not precise in any sense if you deal with real objects of the world, and it is only pedantic and quite confusing to use it unless there are some special subtleties which have to be carefully distinguished.
 
  • Like
Likes eys_physics, zonde, Mentz114 and 2 others
  • #323
Auto-Didact said:
QT in practice consists of two inconsistent mathematical procedures, namely
1) a completely deterministic mathematical theory describing unitary evolution of the wavefunction,
2) an informal probabilistic rule of thumb for evaluating outcomes of experiments.
Instead of 'shut up and calculate', the theoretical and mathematical aspect of QT that physicists learn comprises 1, while 2 is a more informal postulate for the experimental side of QT.

Hmmm. Do you think Andrew Gleason would agree with that?

Thanks
Bill
 
  • #324
bhobba said:
Hmmm. Do you think Andrew Gleason would agree with that?

Thanks
Bill
What Gleason personally would agree with I have no idea, but Gleason's theorem is a very subtle and interesting red herring in this matter. The theorem only shows that any function which consistently assigns probabilities to the measurement outcomes associated with 1 (additively over orthogonal projectors, on a Hilbert space of dimension at least three) must take the form of Born's rule.

The key point is that such a function is not at all part of 1 itself; this is clear from the fact that, in proving the theorem, the function is assumed by fiat to exist at the outset of the argument. In other words, the procedure of proving the theorem sneaks in an extraneous function by assuming its existence, and then demonstrates that any such assumed extraneous function will always have the form of Born's rule.

This theorem definitely doesn't count as an intratheoretical part of 1, i.e. it isn't part of the theoretical description of unitary evolution in any of its various mathematically equivalent forms. The theorem is instead an atheoretical justification, from the context of mathematics itself, for assigning things outside of 1, namely probabilities, to things inside 1 in the context of experiments; in other words, the theorem is a mathematics-based atheoretical justification of 2.

The mathematical consistency of the existence of such an extraneous function in the empirical context of 1 only shows that something like 2 is possible; it does not in any way imply or guarantee that the intrinsic intratheoretical properties of 1 (determinism i.e. differential equations) and 2 (probability) are mathematically consistent with each other.
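As a minimal numerical illustration of the kind of object the theorem is about (a random ##\hat{\rho}## and a random basis, my own toy check, and of course not a proof of the theorem, which runs in the opposite direction): the Born-form assignment ##P \mapsto \mathrm{Tr}(\hat{\rho} P)## on projectors is non-negative, normalized, and additive over orthogonal decompositions -- precisely the "frame function" properties that Gleason assumes and whose general form (for dimension ##\geq 3##) he then pins down.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 4

# A random density matrix rho.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# A random orthonormal basis; P_i are the rank-1 projectors onto its vectors.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
projectors = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(dim)]

# The Born-form assignment p(P) = Tr(rho P) is a non-negative, additive,
# normalized "frame function" on projectors.
p = np.array([np.trace(rho @ P).real for P in projectors])
assert np.all(p >= -1e-12) and np.isclose(p.sum(), 1.0)

# Additivity over an orthogonal decomposition: p(P_0 + P_1) = p(P_0) + p(P_1).
P01 = projectors[0] + projectors[1]
assert np.isclose(np.trace(rho @ P01).real, p[0] + p[1])
print(p)
```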
 
  • Like
Likes protonsarecool, Boing3000 and vanhees71
  • #325
The following paper claims to derive Born's rule.

https://arxiv.org/pdf/1801.06347.pdf
  • Like
Likes Mentz114 and Auto-Didact
  • #326
jeffery_winkler said:
The following paper claims to derive Born's rule.

https://arxiv.org/pdf/1801.06347.pdf
I only gave this a quick read-through, but assuming their logic is valid, this is the first explanation I have ever seen which gives an actual physical reason, which both supports Wheeler's intuition on the matter and seems to give a positive answer to Bohr's conjecture. This sounds almost too good to be true. Moreover, if true, this implies that string theory is either false or at best mathematically equivalent to QFT.
 
  • #327
Auto-Didact said:
The derivation the paper in the OP gives is, as others have also said, a circular argument
Where is the circularity? They carefully avoid it by distinguishing between q-notions (arising in the mathematical theory with statistical names) and true statistical notions.
 