How do entanglement experiments benefit from QFT (over QM)?

Summary
Entanglement experiments can benefit from Quantum Field Theory (QFT) due to its ability to incorporate relativistic effects, which are crucial when reference frames impact outcomes. While non-relativistic Quantum Mechanics (QM) suffices for many entanglement scenarios, QFT is necessary for processes involving particle creation and annihilation, particularly in high-energy contexts. Discussions highlight that QFT is often implicitly used in quantum optics, even if not explicitly referenced in entanglement experiments. The consensus is that while QFT provides a more comprehensive framework, the fundamental aspects of entanglement remain consistent across both QM and QFT. Understanding the interplay between relativity and quantum mechanics is essential for addressing questions about causality and information exchange in entangled systems.
  • #331
vanhees71 said:
I'd not say QT is a generalization of probability theory but it's an extension
Perhaps you mean something subtle by "extension" vs "generalization", but standard terminology is that it is. See Streater's book or the paper by Summers I gave above.

vanhees71 said:
I don't think that this is sufficient though. Another very important ingredient in the realm many-body theory is the application of information theory, i.e., the maximum-entropy principle to QT
MaxEnt is a technique in probability theory: as you said, it's for finding the right distribution (classical case) or the right statistical operator (quantum case).
So the formalism is "just" POVMs, states on them, and a choice of unitary operators. You might, however, need techniques for choosing the right state, evolution operator, POVM, etc. That doesn't negate that these constitute the formalism.
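As a concrete illustration of "POVMs and states on them" in the finite-dimensional case, here is a minimal numerical sketch (my own toy example, not from the thread; the detector sharpness `eta` is an assumed parameter): a POVM is a set of positive operators summing to the identity, and the Born rule assigns outcome probabilities via Tr(ρE_i).

```python
import numpy as np

# A qubit state (statistical operator): mostly |+><+| with a little mixing
rho = 0.9 * np.array([[0.5, 0.5], [0.5, 0.5]]) + 0.1 * np.eye(2) / 2

# An unsharp z-measurement as a POVM: E0, E1 >= 0 and E0 + E1 = I
eta = 0.8  # detector sharpness (hypothetical value; eta = 1 gives a sharp PVM)
E0 = np.array([[(1 + eta) / 2, 0], [0, (1 - eta) / 2]])
E1 = np.eye(2) - E0

# Born rule for POVMs: p_i = Tr(rho E_i)
p = [np.trace(rho @ E).real for E in (E0, E1)]
print(p)  # probabilities of the two outcomes; they sum to 1
```

Setting `eta = 1` recovers the projective (PVM) special case mentioned later in the thread.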
 
  • Like
Likes vanhees71
  • #332
My layman's view of probability theory is that it provides a mathematical axiomatic system, like, e.g., Kolmogorov's. That system of axioms, however, just gives a framework and does not define the concrete probabilities. That's of course a feature, since it should have this flexibility.

The art of applying this framework to real-world problems is to find successful probabilistic descriptions of real-world situations, and QT provides one framework for it.

I think the axiomatic foundation of probability theory is particularly important to get a complete understanding of QT, precisely for the reason of our current discussion: it gives a clear and "non-esoteric" meaning to the "contextuality issue". Indeed, to define probabilities that make sense as described by the (Kolmogorov) axioms, you have to define both the state (operationally, a preparation procedure) and the measured observables (operationally, some measurement procedure).

E.g., at the LHC or RHIC one measures "dileptons" in heavy-ion collisions. The preparation procedure is to provide two beams of lead or gold nuclei with quite well-defined momentum (and thus also energy) and let them collide at specific places. I've never seen this described as a POVM, and the accelerator physicists do very well with classical descriptions of the bunches (either as point particles or, in the case of larger "space-charge densities", hydrodynamically). Many measurements are done to get the dilepton spectra (i.e., the invariant-mass, transverse-momentum and rapidity spectra of electron-positron and muon-antimuon pairs). Among them are ring-imaging Cherenkov detectors: the electrons enter some material, and with appropriate photon detectors one reconstructs the rings from the "Cherenkov cones". Of course, only these two elements together, i.e., the preparation procedure and the measurement device, define the complete "random experiment" in the sense of the Kolmogorov axioms.
 
  • Like
Likes Mentz114 and DarMM
  • #333
DarMM said:
This will be my last post on this.

That's not a counter-example. You've shown that the objects in quantum theory can be embedded in an infinite dimensional object not in quantum theory.

A Gelfand homomorphism is a map that takes C*-algebra elements and maps them to functions over a manifold. This manifold is then the sample space.
Quantum theory's observable algebra lacks a Gelfand homomorphism that covers all of the algebra. Thus it does not have one sample space. The end.

What you are doing is finding an algebra with infinite degrees of freedom with the quantum algebra embedded as a subset. Note though it's not a subalgebra, the embedding destroys some algebraic properties. Then the fact that this much larger algebra, with observables never seen in a lab, has one sample space you are taking as implying QM has one sample space.
In a sense I think you and Elias1960 are actually agreeing; don't let his argumentative style drag you in. Certainly quantum probability is formally embedded in a generalization (or extension, as vanhees put it; without a clear definition it doesn't make much difference) of classical probability, and that much Elias1960 admitted "mathematically". But it is also true that this generalization is at the moment set (quite loosely, though that is the standard in physical theories, especially in quantum field theory, where a rigorous mathematization is still pending) in an infinite-dimensional space that allows one to allude to "one sample space" formally, even if it sounds morally wrong in physical terms.

The alternative, claiming that quantum theory has a probability formalism of its own to the exclusion of the classical one (rather than a formalism flexible enough to incorporate both without fatal contradictions, which is the role of infinite dimensions here, and the whole purpose of functional analysis in quantum theory, I'd say), amounts to saying that quantum theory has a logic of its own, with failing distributive laws for its propositions. That would make impossible the necessary contact the theory has to make with classical physics, banning all semiclassical approaches or even the use of measurement results such as the values of physical constants.

So IMO even if it is tempting and even morally acceptable in a way to claim that there is no longer a single sample space, at the moment, formally at least, it seems like there is.
 
  • #334
vanhees71 said:
My lay-man's view of probability theory is that it provides a mathematical axiomatic system, like e.g., Kolmogorov's. That system of axioms, however just gives a framework and does not define the concrete probabilities
Let me say it this way. Absolutely, the general framework doesn't give you the specific probabilities. However, the general framework does specify how probabilities can possibly "mesh" together, i.e. it gives rules for how to relate sets of probabilities that hold regardless of their specific values. Kolmogorov's theory (i.e. classical probability) leads to a very specific set of meshing rules, one example being the Law of Total Probability.

It then turns out experimentally that some real world probabilities, such as those found in atomic or sub-atomic scale experiments, do not obey those meshing rules. Thus we need a more general theory of how probabilities interrelate than those found in Kolmogorov/classical probability. That generalization is quantum probability theory.
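The failure of the classical meshing rules can be checked in a few lines. Here is a toy sketch of my own (not from the thread): for a qubit prepared in |+>, the direct probability of finding X = +1 differs from the value the Law of Total Probability would assign after conditioning on an intermediate Z measurement.

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)          # |+>, eigenstate of X
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Direct probability of finding X = +1 in the state |+>: it is certain
p_direct = abs(plus @ plus) ** 2                   # = 1.0

# Classical "meshing": sum over an intermediate Z outcome
p_total = 0.0
for z in z_basis:
    p_z = abs(z @ plus) ** 2                       # P(Z = z) = 1/2
    p_x_given_z = abs(plus @ z) ** 2               # P(X = +1 | collapsed to z) = 1/2
    p_total += p_z * p_x_given_z                   # sums to 1/2, not 1

print(p_direct, p_total)
```

The two routes disagree (1.0 vs 0.5), which is exactly the sense in which atomic-scale probabilities refuse the Kolmogorov meshing rules.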

vanhees71 said:
The preparation procedure is to provide two beams of lead or gold nuclei with quite well-defined momentum (and thus also energy) and let them collide at specific places. I've never seen this being described as a POVM
One doesn't always need a POVM. In many cases a PVM will do. POVMs are simply the most general notion.
 
  • Like
Likes mattt
  • #335
Tendex said:
but it is also true that the generalization is at the moment set in an infinite dimensional space
From Hardy's infinite ontological baggage theorem it must always be infinite dimensional, not just that at the moment that's the only way we can do it.

Tendex said:
this would make impossible the necessary contact the theory has to make with classical physics
No, because macroscopic observables end up commuting (for various reasons such as decoherence, Pitowsky's lack of entanglement witnesses, etc.), which means they have classical statistics and thus one recovers classical physics. Since quantum probability is more general than classical probability, it can contain classical probability.

Observables in general don't live in a single sample space, but macroscopic observables do. That's all there is to it.
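The link between commutativity and a common sample space can be made concrete. A small sketch (my own illustration, under the assumption that "macroscopic" here just means "mutually commuting"): commuting observables share an eigenbasis, so a single joint distribution over their outcome pairs exists, while non-commuting ones admit no such basis.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])     # Pauli matrices
Z = np.array([[1, 0], [0, -1]])
I = np.eye(2)

# Two commuting observables on a pair of qubits: Z x I and I x Z
A, B = np.kron(Z, I), np.kron(I, Z)
assert np.allclose(A @ B, B @ A)   # they commute ...

# ... so a joint eigenbasis exists, and a joint distribution over the four
# outcome pairs (one sample space for A and B together) can be written down
psi = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # an entangled state
basis = np.eye(4)                               # the shared eigenbasis here
joint = [abs(e @ psi) ** 2 for e in basis]      # p(a, b) for each outcome pair
print(joint)                                    # a classical distribution, sums to 1

# Non-commuting observables admit no such joint eigenbasis
assert not np.allclose(X @ Z, Z @ X)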

Tendex said:
So IMO even if it is tempting and even morally acceptable in a way to claim that there is no longer a single sample space, at the moment, formally at least, it seems like there is
No. QM does not have a single sample space. That is a fact of the formalism due to it not having a Gelfand homomorphism that covers the entire algebra.
"Formally" there is an infinite dimensional sample space of an alternate theory that is not QM where the QM algebra appears as a subset (not subalgebra crucially).
If we were to use your language we would have to say:
"So IMO even if it is tempting and even morally acceptable in a way to claim that in General Relativity spacetime is not flat, at the moment , formally at least, it seems like it is flat"

As I said this entire line of discussion is like saying we should acknowledge that all spacetimes in General Relativity can be embedded in a 231-D Minkowski spacetime and for that reason "strictly speaking" spacetime is not curved. Nobody would do this as:
  1. In order to explain our observations you have to come up with a restriction, i.e. for some reason we are confined to a 4D hypersurface. Just so, in such an infinite dimensional sample space replacing QM we are confined epistemically.
  2. It's not what General Relativity says, but what an alternate, unevidenced Special Relativity says. Exactly so, a single sample space is mathematically not what QM says (provably), but what an alternate, unevidenced classical probabilistic theory says.
 
  • Like
Likes mattt, Auto-Didact, weirdoguy and 1 other person
  • #336
DarMM said:
From Hardy's infinite ontological baggage theorem it must always be infinite dimensional, not just that at the moment that's the only way we can do it.

No, because macroscopic observables end up commuting (for various reasons such as decoherence, Pitowsky's lack of entanglement witnesses, etc.), which means they have classical statistics and thus one recovers classical physics. Since quantum probability is more general than classical probability, it can contain classical probability.

Observables in general don't live in a single sample space, but macroscopic observables do. That's all there is to it.

No. QM does not have a single sample space. That is a fact of the formalism due to it not having a Gelfand homomorphism that covers the entire algebra.
"Formally" there is an infinite dimensional sample space of an alternate theory that is not QM where the QM algebra appears as a subset (not subalgebra crucially).
If we were to use your language we would have to say:
"So IMO even if it is tempting and even morally acceptable in a way to claim that in General Relativity spacetime is not flat, at the moment , formally at least, it seems like it is flat"

As I said this entire line of discussion is like saying we should acknowledge that all spacetimes in General Relativity can be embedded in a 231-D Minkowski spacetime and for that reason "strictly speaking" spacetime is not curved. Nobody would do this as:
  1. In order to explain our observations you have to come up with a restriction, i.e. for some reason we are confined to a 4D hypersurface. Just so, in such an infinite dimensional sample space replacing QM we are confined epistemically.
  2. It's not what General Relativity says, but what an alternate, unevidenced Special Relativity says. Exactly so, a single sample space is mathematically not what QM says (provably), but what an alternate, unevidenced classical probabilistic theory says.
What is that alternate theory that is not quantum theory you refer to?
 
  • #337
Tendex said:
What is that alternate theory that is not quantum theory you refer to?
What ever retrocausal or nonlocal hidden variable theory is giving the infinitely large sample space.
 
  • #338
DarMM said:
What ever retrocausal or nonlocal hidden variable theory is giving the infinitely large sample space.
Ok, that's regular QM, only you are stressing a specific interpretation to describe it.
 
  • #339
Tendex said:
Ok, that's regular QM, only you are stressing a specific interpretation to describe it.
No, they have a completely different mathematical structure and are in fact different theories. Regular QM mathematically does not have a single infinite dimensional sample space.

I don't know how you can claim this is regular QM. Show me, mathematically, the infinite dimensional contextual single sample space in QM. You will not be able to, because it doesn't have one. The observable algebra is of such a form that there isn't a single Gelfand homomorphism for it, thus it is impossible. Mathematically impossible. This has nothing to do with interpretations. The algebra in QM does not have a single sample space.
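One standard finite illustration of this impossibility (my addition, not cited in the thread) is the Mermin-Peres magic square: nine two-qubit observables whose quantum product constraints cannot all be met by any global ±1 value assignment, which is exactly what a single sample space would require. A brute-force check:

```python
import itertools

# Mermin-Peres magic square: nine two-qubit observables in a 3x3 grid.
# QM fixes the product of the observables in each row to +1 and in each
# column to +1, except the third column, whose product is -1.
# A single sample space would assign every observable a definite +/-1 value
# respecting all six product constraints simultaneously.
def satisfies(v):
    rows = all(v[3 * r] * v[3 * r + 1] * v[3 * r + 2] == +1 for r in range(3))
    cols = all(v[c] * v[c + 3] * v[c + 6] == +1 for c in range(2))
    last = v[2] * v[5] * v[8] == -1
    return rows and cols and last

solutions = [v for v in itertools.product([1, -1], repeat=9) if satisfies(v)]
print(len(solutions))  # 0: no global value assignment exists
```

The row constraints force the product of all nine values to be +1 while the column constraints force it to be -1, so the search necessarily comes up empty.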
 
Last edited:
  • Like
Likes mattt and weirdoguy
  • #340
DarMM said:
No, they have a completely different mathematical structure and are in fact different theories. Regular QM mathematically does not have a single infinite dimensional sample space.

I don't know how you can claim this is regular QM. Show me, mathematically, the infinite dimensional contextual single sample space in QM. You will not be able to, because it doesn't have one. The observable algebra is of such a form that there isn't a single Gelfand homomorphism for it, thus it is impossible. Mathematically impossible. This has nothing to do with interpretations. The algebra in QM does not have a single sample space.
So you are then restricting quantum theory to the algebra of observables? Ok, but hadn't you said that the theory includes the macroscopic observables (measurements) and the classical probability?
 
  • #341
Tendex said:
So you are then restricting quantum theory to the algebra of observables?
I'm not restricting. The algebra of observables and states upon it constitutes the kinematics of quantum theory. What am I leaving out? There is no restriction.

Tendex said:
Ok, but hadn't you said that the theory includes the macroscopic observables (measurements) and the classical probability?
Yes I have. Macroscopic observables are a subset of the observable algebra which all commute and thus this subalgebra has classical probability.

This shows up in many places in Quantum Theory where the observable for electric charge for example has only classical probability. Subsets of the observable algebra can have classical probability. Electric charge observables are one example, macroscopic observables are another.
 
  • Like
Likes mattt and Jimster41
  • #342
DarMM said:
I'm not restricting. The algebra of observables and states upon it constitutes the kinematics of quantum theory. What am I leaving out? There is no restriction.

Yes I have. Macroscopic observables are a subset of the observable algebra which all commute and thus this subalgebra has classical probability.

This shows up in many places in Quantum Theory where the observable for electric charge for example has only classical probability. Subsets of the observable algebra can have classical probability. Electric charge observables are one example, macroscopic observables are another.
So, including such measurements, plus the reasonable assumption that they don't influence each other faster than light, it seems to me that a formalism trying to meet these premises (acknowledging, as I commented previously, that such a formalism hasn't been rigorously found yet) should always be able to abstract the macroscopic measurements into a single sample space.
This notwithstanding that one is of course (given all that you have also explained) always free to use more than one sample space in the description of a certain quantum experiment or lab setting.

I think your examples using GR are not useful here, as everybody knows GR is a classical theory; perhaps if we had a quantum gravity theory they could apply, but that's not the case.
 
  • #343
Tendex said:
So including such measurements, plus the reasonable assumption that they don't influence each other faster than light
What measurements are we including here?

Tendex said:
I think your examples using GR are not useful here as everybody knows GR is a classical theory
You're missing the point of the analogy. It's not about whether GR is classical or not or whether people know that. The classicality of GR is beside the point.

The point is that all of GR's manifolds can be embedded in the Minkowski space of a much higher dimensional Special Relativity. Thus we can recast GR as a subset of a different theory with far more degrees of freedom with a highly unnatural restriction. Thus it is for QM and these single sample space theories. Both can be recast as a subset of a larger theory with a highly unnatural restriction. In both cases the larger theory, in addition to the subset that replicates QM/GR, has elements that have never experimentally been confirmed.

Thus in both cases there is no reason to cast doubt on the statements in the actual theory such as "there is more than one sample space" or "spacetime is curved".
 
  • #344
DarMM said:
What measurements are we including here?

You're missing the point of the analogy. It's not about whether GR is classical or not or whether people know that. The classicality of GR is beside the point.

The point is that all of GR's manifolds can be embedded in the Minkowski space of a much higher dimensional Special Relativity. Thus we can recast GR as a subset of a different theory with far more degrees of freedom with a highly unnatural restriction. Thus it is for QM and these single sample space theories. Both can be recast as a subset of a larger theory with a highly unnatural restriction. In both cases the larger theory, in addition to the subset that replicates QM/GR, has elements that have never experimentally been confirmed.

Thus in both cases there is no reason to cast doubt on the statements in the actual theory such as "there is more than one sample space" or "spacetime is curved".
Oh, but I'm not casting doubt on those statements; I'm saying that functional analysis allows us to make them compatible with the statement that one sample space is allowed, unless one rejects classical mathematical logic as the basis of quantum theory, which I don't think you are doing.
As for GR, you actually have people like Kip Thorne, and indeed all particle physicists I know, making the idea of curvature compatible with the "flatness" of the infinite-dimensional space one needs (to have general covariance anyway). This is of course never to the exclusion of curvature, or of multiple sample spaces in our case, but it also allows the abstraction of defining one sample space, to the extent that macroscopic local measurements are all we have access to in physics and that, assuming no FTL influences and local gauge invariance, they are at some level random in the classical sense of approximately equal likelihood.
 
  • #345
Tendex said:
I'm saying that functional analysis allows us to make them compatible with the statement of allowing one sample space
Only with an artificial restriction and also it's a contextual sample space containing an infinite number of degrees of freedom nobody has ever observed. Why is this even being discussed?

Quantum Theory does not have a single sample space; that is a mathematical fact. I have given Streater and Summers as two experts in the area who state this. If you disagree, show me a construction of a single sample space that does not postulate an infinite number of observables unconfirmed by actual observations.

Tendex said:
As for GR, you actually have people like Kip Thorne, and indeed all particle physicists I know, making the idea of curvature compatible with the "flatness" of the infinite-dimensional space one needs (to have general covariance anyway). This is of course never to the exclusion of curvature, or of multiple sample spaces in our case, but it also allows the abstraction of defining one sample space, to the extent that macroscopic local measurements are all we have access to in physics and that, assuming no FTL influences and local gauge invariance, they are at some level random in the classical sense of approximately equal likelihood.
Show me this construction by Kip Thorne. I've never seen it.
 
  • #346
DarMM said:
Only with an artificial restriction and also it's a contextual sample space containing an infinite number of degrees of freedom nobody has ever observed. Why is this even being discussed?

Quantum Theory does not have a single sample space; that is a mathematical fact. I have given Streater and Summers as two experts in the area who state this. If you disagree, show me a construction of a single sample space that does not postulate an infinite number of observables unconfirmed by actual observations.


Show me this construction by Kip Thorne. I've never seen it.
No one has ever observed infinite dimensions, for that matter. Only an infinity of observations, outside physics, could confirm an infinite number of dof; but if we are talking about the mathematics that supports the physics, you would also have to ask me for evidence of the elements of the infinite sets used in quantum theory for your demand to make sense.

But you have not explained how all equiprobable macroscopic local measurements (the only ones possible), in the sense of not influencing each other FTL at spacelike separation, can't use an infinite dimensional space (at the very base of the theory) to form a sample space.
 
  • #347
If I understand it right, what @DarMM refers to is the fact that, in contradistinction to classical (statistical) physics, in QT the sample space of "all possible observables on the system" does not make sense, i.e., there are no states (pure or mixed) for which all observables take predetermined, yet maybe unknown, values. That's not only a mathematical fact about QT but seems pretty sure to be an empirical fact too, as the many Bell tests show, all of which confirm QT rather than any possible local deterministic hidden-variable model!
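The Bell-test point can be made quantitative with the CHSH combination. A short sketch (my own numbers, using the standard singlet correlation function rather than anything specific from this thread): any model with one sample space of predetermined values obeys |S| ≤ 2, while the quantum prediction reaches 2√2 at suitably chosen angles.

```python
import numpy as np

# Singlet correlations: E(a, b) = -cos(a - b) for spin measurements along
# directions a and b. Any single-sample-space (local hidden variable) model
# obeys |S| <= 2; the quantum value below reaches 2*sqrt(2).
def E(a, b):
    return -np.cos(a - b)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2) > 2
```

Experiments measure correlations close to these quantum values, which is why the Bell tests count as empirical, not merely mathematical, evidence against a single sample space of predetermined values.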
 
  • Like
Likes Mentz114 and DarMM
  • #348
Tendex said:
No one has ever observed infinite dimensions, for that matter. Only an infinity of observations, outside physics, could confirm an infinite number of dof; but if we are talking about the mathematics that supports the physics, you would also have to ask me for evidence of the elements of the infinite sets used in quantum theory for your demand to make sense.
I'm not talking about infinite spatial dimensions. You don't even need to make an infinity of observations. I'm talking about a mathematical fact of the theory. Show me a single sample space that doesn't need to postulate an infinite number of additional degrees of freedom.

The resulting sample space has observables far more general than those in quantum theory. Not even a finite subset of these have been seen. Not even one of them has been seen.

I'll be frank: I don't think you really understand the single sample space hidden variable constructions, and you are confusing several concepts. Have you gone through Hardy's infinite ontological baggage theorem? If not, I'd read up on it and go through the proof.

Tendex said:
But you have not explained how all equiprobable macroscopic local measurements (the only ones possible), in the sense of not influencing each other FTL at spacelike separation, can't use an infinite dimensional space (at the very base of the theory) to form a sample space.
"At the very base of the theory"? What does this mean?
You do realize that such an infinite dimensional sample space contains several observables that don't correspond to anything we've ever seen right?
 
  • #349
vanhees71 said:
If I understand it right, what @DarMM refers to is the fact that, in contradistinction to classical (statistical) physics, in QT the sample space of "all possible observables on the system" does not make sense, i.e., there are no states (pure or mixed) for which all observables take predetermined, yet maybe unknown, values. That's not only a mathematical fact about QT but seems pretty sure to be an empirical fact too, as the many Bell tests show, all of which confirm QT rather than any possible local deterministic hidden-variable model!
Precisely the part you have in bold.

To restore the idea of a sample space for all possible observables of the system, we have to postulate an infinite number of degrees of freedom nobody has ever seen in a lab; that's basically what Hardy's theorem says. So sure, you can make such an infinite dimensional sample space, but who cares that you can do this? None of those things have been seen in a lab.

It's exactly like the fact that "mathematically" any spacetime in GR can be embedded in a 231-D Minkowski background. Who cares? We have no evidence of those additional 227 dimensions.
 
  • Like
Likes mattt and vanhees71
  • #350
Indeed, isn't this the most surprising, for many physicists of the first "quantum generation" even disturbing, discovery of QT to begin with: no matter how accurately you may be able to prepare a system (and the most "accurate" states possible are just the pure states, i.e., ##\hat{\rho}## is a projection operator), almost all observables do not take determined values; only a set of compatible observables (and functions thereof) do?

That's the "danger" of getting involved with the natural sciences: It may happen that your learn something completely new about the natural world, as far as objective facts about it are concerned, that contradict worldviews that seemed very much confirmed by "common sense"! I think, nowadays most physicists are not disturbed anymore by this big surprised, simply because they are used to it by just learning the most recent physical worldview. I think it's save to say that any standard curriculum in physics on any level aims at to provide an understanding to some degree of modern quantum theory as the most comprehensive scientific world view we have today, and only philosophers still have some quibbles with it.

It goes even further: these very foundational issues, for some decades present only in gedanken experiments, have become standard not only in the (quantum optician's) lab but are becoming part of engineering today. This is even dubbed the "2nd quantum revolution" in the popular press: the development of technology based on the "very disturbing quantum weirdness" of the founding fathers, using entanglement in practical applications, which may soon become everyday tools, as computers, mobile phones and all that are today (themselves realizations of fundamental physics of the 19th and 20th centuries, like electromagnetism (electricity in each household) and quantum mechanics (semiconductor electronics in our beloved cell phones, tablets, and PCs of all kinds)). One example that's already realized (though not yet in common use) is "quantum cryptography", recently used for secure communication between Austria and China via satellite. I also guess that quantum computers will become reality pretty soon, though it may take some time until I can buy my first quantum personal computer to put on my desk ;-).
 
  • Like
Likes mattt and DarMM
  • #351
vanhees71 said:
isn't this the most surprising, for many physicists of the first "quantum generation" even disturbing, discovery of QT to begin with: no matter how accurately you may be able to prepare a system (and the most "accurate" states possible are just the pure states, i.e., ##\hat{\rho}## is a projection operator), almost all observables do not take determined values; only a set of compatible observables (and functions thereof) do?
This is only due to the traditional fiction that observables should be thought of as something other than functions of the state.

The thermal interpretation exchanges this fiction by postulating that whatever is observable is a function of the state ##\rho## of the system, and everything becomes rationally understandable again.
 
  • #352
Observables are something other than functions of the state. It's the most important first thing you have to learn about QT to make sense of it at all. Again, your "thermal interpretation" is not a satisfactory substitute for the standard minimal interpretation. For me it even clearly violates empirical facts, since we do not in general simply get quantum mechanical expectation values as the outcomes of measurements!
 
  • #353
Tendex said:
you have actually people like Kip Thorne and actually all particle physicists I know making compatible the idea of curvature and " flatness" of the infinite dimensional space one needs (to have general covariance anyway)

Where are you getting this from?
 
  • Like
Likes weirdoguy
  • #354
vanhees71 said:
Observables are something other than functions of the state. It's the most important first thing you have to learn about QT to make sense of it at all.
It is the only thing one has to unlearn again to make intuitive sense of quantum mechanics.
vanhees71 said:
For me it even clearly violates empirical facts, since we do not in general simply get quantum mechanical expectation values as the outcomes of measurements!
Since the thermal interpretation never claimed this, you argue against an irrelevant caricature of it.

It is only claimed that we get approximations to quantum mechanical expectation values with an error of at least the quantum mechanical uncertainty (e.g., for spin with an error of ##O(\hbar)##) as the outcome of measurements! This is true for all the standard experiments.
 
Last edited:
  • Like
Likes julcab12
  • #355
This is not true at all. Already in 1924, Stern and Gerlach got the value of the electron magnetic moment as about 1 Bohr magneton, within a few percent, not with a 100% error as you claim. It's well known that nowadays it's among the most precisely measured values ever. For the comparison to contemporary theory you need high-order Standard-Model loop corrections on the theory side as well!
 
  • #356
vanhees71 said:
Already in 1924, Stern and Gerlach got the value of the electron magnetic moment as about 1 Bohr magneton, within a few percent, not with a 100% error as you claim.
The experiment just produced a bimodal distribution of measurement results.
The interpretation of this distribution depends of course on the interpretation!

In the thermal interpretation, the error is measured as the difference to the expectation value (as always in statistics), not to the nearest mode.

The two peaks of this distribution are narrow with a width of a few percent, but the difference to the expectation value is of the order of 100%, as claimed by the thermal interpretation.
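The claim about the bimodal distribution is easy to check numerically. A toy simulation of my own (the few-percent peak width is an assumed parameter, chosen to match the "few percent" in the thread, not any real Stern-Gerlach data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Stern-Gerlach record for S_z measured in the state |+x>, so the
# expectation value is 0: outcomes cluster in two narrow peaks at +/-1
# (in units of a Bohr magneton), each with a few-percent width.
outcomes = rng.choice([+1.0, -1.0], size=100_000)
outcomes += rng.normal(0.0, 0.03, size=outcomes.size)

mean = outcomes.mean()     # close to the expectation value, 0
rms_dev = outcomes.std()   # spread about the mean: ~1, i.e. of order 100%
print(mean, rms_dev)
```

Each peak is only a few percent wide, yet the rms deviation from the expectation value is of order 1 Bohr magneton, which is the distinction the two posts above are arguing over.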
 
  • #357
Elias1960 said:
That there are tons of "generalizations" in the mathematical sense is a triviality. Remove whatever axiom you do not like most, and you have a generalization in the mathematical sense.
Most purely formal generalizations are uninteresting because they are just a formalist game, with no meaning except in the abstract formal sense, but this is not true of all generalizations. I am specifically speaking about applications, in physics or some other field, which lead to generalizations of probability; the difference between these and the former is that they come directly from empirical practice, as data, instead of from formalists playing hide-and-seek with axioms.
Elias1960 said:
Nice example - but it only shows that the interpretations which treat those negative things as probabilities are nonsense.
There are sophisticated applied mathematical models of entanglement built upon such objects, which trivially subsume probability theory and at the same time are capable of unifying wide swaths of mathematics in the process; to ignore all of this purely for the ideological reasons you posit is to halt the march of science.
Elias1960 said:
The point is not that it has been axiomatized. The point is the particular axiomatization given by Cox and Jaynes of the logic of plausible reasoning. To generalize it means, essentially, to accept forms of plausible reasoning so that using different ways to argue would lead to different results, in other words, it would allow inconsistent reasoning
That is a very specific philosophy, and a very premature one at that: you again assume that these axiomatizations are the logic of plausible reasoning, instead of a logic of plausible reasoning. Moreover, you seem to be implicitly restricting plausible reasoning to human reasoning, while it has already been demonstrated empirically that there exist artificial algorithms which reason in a totally foreign manner and, in doing so, sometimes arrive at better answers than humans can for certain kinds of questions.

It isn't much of a stretch to think that this is because these algorithms are actually utilizing undiscovered forms of mathematics which already exist and are consistent with these generalized probability theories; that is, in fact, exactly what would be needed to legitimize and normalize such generalizations within the contemporary practice of mathematics and the sciences.
Elias1960 said:
I do not care about a "concept of chance", but about the rules of consistent plausible reasoning.
There are in fact other forms of plausible reasoning which were discovered empirically and are formally used in actual practice, and which aren't isomorphic to either the Kolmogorov or the Cox axiomatization: possibility theory, quantum probability and fuzzy logic, just to name a few. Especially in our modern computational era - which will some day be seen as the golden age of neural networks - where such alternative models are actually being implemented and studied not just as abstractions but as applied constructions, your stance is scientifically simply unjustifiable.
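A minimal concrete contrast between the Kolmogorov rules and quantum probability (a standard textbook-style illustration, not tied to any particular model named above): classically, "A and then B" and "B and then A" are equally probable, but for noncommuting spin-1/2 projectors the sequential Born/Lüders filtering probabilities depend on the order.

```python
import numpy as np

# Spin-1/2 projectors onto "up" along z and "up" along x.
Pz = np.array([[1, 0], [0, 0]], dtype=complex)       # |z+><z+|
xp = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |x+>
Px = np.outer(xp, xp.conj())                         # |x+><x+|

rho = np.array([[0.8, 0], [0, 0.2]], dtype=complex)  # a mixed state

def seq_prob(rho, first, second):
    """Probability of passing the 'first' filter and then the 'second'
    (Born rule with Lueders state update after the first filter)."""
    after_first = first @ rho @ first
    return np.trace(second @ after_first @ second).real

p_zx = seq_prob(rho, Pz, Px)  # z-filter, then x-filter
p_xz = seq_prob(rho, Px, Pz)  # x-filter, then z-filter

print(f"P(z+ then x+) = {p_zx:.2f}")  # 0.40
print(f"P(x+ then z+) = {p_xz:.2f}")  # 0.25
```

With this state the two orders give 0.4 and 0.25, so no single joint Kolmogorov distribution over the four outcomes can reproduce both - the noncommutativity of Pz and Px is precisely what breaks the classical axioms.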
 
Last edited:
  • #358
A. Neumaier said:
The experiment just produced a bimodal distribution of measurement results.
The interpretation of this distribution depends of course on the interpretation!

In the thermal interpretation, the error is measured as the deviation from the expectation value (as always in statistics), not from the nearest mode.

The two peaks of this distribution are narrow, with a width of a few percent, but the deviation from the expectation value is of the order of 100%, as claimed by the thermal interpretation.
The final version of the SGE was much more than you claim. It was a careful quantitative analysis, confirming the magnetic moment of the electron to be 1 Bohr magneton to within a few percent accuracy:

W. Gerlach, O. Stern, Über die Richtungsquantelung im Magnetfeld, Ann. Phys. (Leipzig) 379, 673 (1924)
https://doi.org/10.1002/andp.19243791602
Of course, the value from spectroscopy (Zeeman effect) was most probably already much more accurate at that time. Nowadays it's among the most accurately measured fundamental quantities.
 
  • #359
vanhees71 said:
The final version of the SGE was much more than you claim. It was a careful quantitative analysis, confirming the magnetic moment of the electron to be 1 Bohr magneton to within a few percent accuracy:

W. Gerlach, O. Stern, Über die Richtungsquantelung im Magnetfeld, Ann. Phys. (Leipzig) 379, 673 (1924)
https://doi.org/10.1002/andp.19243791602
Of course, the value from spectroscopy (Zeeman effect) was most probably already much more accurate at that time. Nowadays it's among the most accurately measured fundamental quantities.
Sure, but this is the determination of a parameter in the Hamiltonian, not the measurement of an observable. The thermal interpretation differs from tradition only in the latter. For parameter determination there is no significant difference from the tradition.

Thus your observation does not affect the validity of the thermal interpretation.
 
  • Like
Likes mattt
  • #360
Of course, from a fundamental-physics point of view you can say that, in the end, all measurements are determinations of some parameter in a Hamiltonian.
 
