How do entanglement experiments benefit from QFT (over QM)?

Entanglement experiments can benefit from Quantum Field Theory (QFT) due to its ability to incorporate relativistic effects, which are crucial when reference frames impact outcomes. While non-relativistic Quantum Mechanics (QM) suffices for many entanglement scenarios, QFT is necessary for processes involving particle creation and annihilation, particularly in high-energy contexts. Discussions highlight that QFT is often implicitly used in quantum optics, even if not explicitly referenced in entanglement experiments. The consensus is that while QFT provides a more comprehensive framework, the fundamental aspects of entanglement remain consistent across both QM and QFT. Understanding the interplay between relativity and quantum mechanics is essential for addressing questions about causality and information exchange in entangled systems.
  • #301
Every classical probability theory is in essence the spectral theory of some set of commuting operators ##\mathcal{A}## together with a normalized state ##\rho## on them.

Since all elements of ##\mathcal{A}## commute, and since the spectral theorem tells us that any such operator can be represented as multiplication on its spectrum, we can represent all of these operators as functions ##f## on some common sample space ##\mathcal{M}##, with the state ##\rho## then becoming a probability measure on ##\mathcal{M}##. Gelfand's representation theorem thus tells us that every commutative C*-algebra gives a probability model and that every probability model has random variables forming a commutative C*-algebra. Thus we have a direct correspondence between Kolmogorov's theory and commutative C*-algebras.

An alternative, more general probability theory can then be developed by letting the C*-algebra be non-commutative. This is a decades-old observation. In the same sense that non-commutative algebras are more general than commutative ones, quantum probability is more general than classical probability.
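As a toy finite-dimensional illustration of this correspondence (my own sketch, not part of the post; the matrices and the state are hypothetical choices): two commuting observables admit a common eigenbasis, which plays the role of the single sample space on which both become ordinary random variables, while noncommuting observables admit no such common basis.

```python
import numpy as np

# Pauli-type matrices
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def commute(a, b):
    return np.allclose(a @ b, b @ a)

# Two commuting observables: simultaneous diagonalization gives a
# common eigenbasis, i.e. one joint sample space for both.
A = np.kron(sz, I2)
B = np.kron(I2, sz)
assert commute(A, B)

# A hypothetical prepared state |psi> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Joint distribution over the common eigenbasis (the sample space):
# outcome k has probability |<k|psi>|^2, and both A and B are just
# functions (their eigenvalues) on this one set of outcomes.
probs = np.abs(psi) ** 2
print(probs)  # probabilities 0.5, 0, 0, 0.5

# Noncommuting observables admit no common eigenbasis, hence no
# single sample space carrying both as random variables.
assert not commute(sz, sx)
```

The commuting pair behaves exactly like two random variables on one Kolmogorov sample space; the noncommuting pair is where the generalization begins.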
 
  • #302
atyy said:
I guess, reading your replies to @vanhees71, that this is due to the KS theorem. But I thought the point of the KS theorem is that QM is contextual? What has contextuality got to do with an inability to consider preparations as ensembles?
Quantum Theory doesn't have a single sample space; that immediately blocks preparations from being ensembles. Unless one wants to move beyond the theory as it is, that's the end of the story.

However, even if you try to have one sample space, as in a hidden variable theory, the KS theorem tells you it has to include the device as a label on the outcomes. Thus the preparation is for system-device pairs, and so the device has to be mentioned.
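One concrete finite-dimensional instance of the Kochen-Specker obstruction is the Peres-Mermin square; the following sketch (my own illustration, not from the thread) checks the operator identities and brute-forces the nonexistence of a noncontextual ##\pm 1## value assignment.

```python
import itertools
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)
kron = np.kron

# The Peres-Mermin square: each row and each column is a commuting set.
square = [
    [kron(X, I), kron(I, X), kron(X, X)],
    [kron(I, Y), kron(Y, I), kron(Y, Y)],
    [kron(X, Y), kron(Y, X), kron(Z, Z)],
]

I4 = np.eye(4)
for k in range(3):
    row = square[k]
    col = [square[j][k] for j in range(3)]
    assert np.allclose(row[0] @ row[1] @ row[2], I4)      # every row multiplies to +I
    target = -I4 if k == 2 else I4
    assert np.allclose(col[0] @ col[1] @ col[2], target)  # third column gives -I

# A noncontextual hidden-variable model would assign each of the nine
# observables one fixed value +-1 obeying the same six product constraints.
# Brute force over all 2^9 assignments shows none exists:
def consistent(v):
    rows = all(v[3*k] * v[3*k+1] * v[3*k+2] == 1 for k in range(3))
    cols = (v[0]*v[3]*v[6] == 1 and v[1]*v[4]*v[7] == 1 and v[2]*v[5]*v[8] == -1)
    return rows and cols

assert not any(consistent(v) for v in itertools.product([1, -1], repeat=9))
print("No noncontextual value assignment exists")
```

Any value assignment must therefore depend on which commuting set (which device) the observable is measured alongside, which is the "device as a label on the outcomes" point above.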
 
  • #303
DarMM said:
But it simply doesn't as a mathematical fact. An ensemble is an approximate realisation of a sample space. In quantum probability the state alone does not give one a well defined sample space due to Kochen-Specker contextuality. That's all there is to it.

What would help me is understanding what you think the Kochen-Specker theorem implies. Then we could more easily discuss this since currently what you are saying seems in direct contradiction to it.
I thought that K&S only applies to unitary projections ##|\psi\rangle\langle \psi|## and thus cannot apply to momentum.
 
  • #304
Mentz114 said:
I thought that K&S only applies to unitary projections ##|\psi\rangle\langle \psi|## and thus cannot apply to momentum.
An actual momentum measurement is either a projection or a POVM to which the Kochen-Specker theorem applies.
 
  • #305
DarMM said:
An actual momentum measurement is either a projection or a POVM to which the Kochen-Specker theorem applies.
I don't see how that follows from the first paragraph of

Two simple proofs of the Kochen-Specker theorem
Asher Peres
J. Phys. A: Math. Gen. 24 (1991) L175-L178

Am I missing something?
Do you have another reference?
 
  • #306
atyy said:
I guess it is the difference between saying that determinism is a special case of randomness (which is mathematically true, since one can use the delta measure), or saying that randomness is a special case of determinism (which is not mathematically true, but is physically true in the sense that we can consider statistical mechanics as arising from Newton's laws for many particles and ignorance of the exact state).
Scientifically - i.e. from a foundational perspective - only one of the two above is true. Take curved space in mathematics as an example: the theory of flat geometry is a subset of the more general theory of curved geometry, not the other way around.

The fact that curved space can be approximated by flat space is foundationally completely uninteresting, even if it is practically interesting precisely because you can use approximations. Mathematically this distinction is absolutely clear; in fact, almost every famous physicist of the generation who lived through the QT revolution understood these issues deeply, and it is only modern physicists who don't.

This misunderstanding among modern physicists happens because experimental QT went through a golden age, creating a hope or even belief that the theoretical problems would be resolved. Alas, the problems weren't resolved; instead the very context of the problem was altered. This led modern physicists - after decades of having a more experimental than theoretical edge - to disregard such issues and pretend they were 'not real problems', exactly as their teachers told the theorists during the height of the golden age of experimental QT.

This cavalier attitude has become so dominant in the practice of physics that it even permeates almost all (undergraduate) textbooks and curricula, i.e. the bias has become institutionalized. This has led most physicists to opt in favor of the culmination of experimental QT: the modern instrumentalist philosophy of theoretical science, which they erroneously believe actually resolves these issues. When physicists find it normal to say that two opposite mathematical statements are actually saying the same thing, clearly there is a problem; this is hand-waving at its worst!

The modern instrumentalist philosophy of theoretical science and its poster child - i.e. SR-based QFT - does not and cannot resolve the original issues of QT; it automatically fails foundationally because SR already failed foundationally, as everyone knew back then. Instead of admitting this, as the theorists of old immediately did, they evade the question and pretend that there are no serious problems left to address. From a mathematical standpoint this is clearly unacceptable, which is exactly why it is accepted in neither mathematics nor mathematical physics; hence the concerted effort of constructive QFT and so on.
 
  • #307
Mentz114 said:
I don't see how that follows from the first paragraph of

Two simple proofs of the Kochen-Specker theorem
Asher Peres
J. Phys. A: Math. Gen. 24 (1991) L175-L178

Am I missing something?
Why would you think that paper would tell you the type of operator that represents a momentum measurement? It's talking about assigning values to PVMs in general; it's not going to go through POVMs for momentum.
 
  • #308
DarMM said:
Quantum Theory doesn't have a single sample space; that immediately blocks preparations from being ensembles. Unless one wants to move beyond the theory as it is, that's the end of the story.

However, even if you try to have one sample space, as in a hidden variable theory, the KS theorem tells you it has to include the device as a label on the outcomes. Thus the preparation is for system-device pairs, and so the device has to be mentioned.

So does this mean that, in BM with equilibrium, since the preparation includes the device, neither BM nor QM has a single sample space?
 
  • #309
DarMM said:
All true. I don't know how it affects what I say though.
Quantum theory generalizes classical probability theory. That observation is decades old by now.
There is no quantum "generalization of classical probability theory", just as "quantum logic" is not a "generalization" of logic, because both "generalizations" have nothing to do with the original, "generalized" thing. Logic, as well as probability theory (which, following Cox and Jaynes, is only the "logic of plausible reasoning"), defines how to handle propositions which have truth values "true" or "false" and nothing else. If you look at those "generalizations", they are about something different: the "and" and "not" operations are simply not the logical "and" and "not" operations on statements with truth values in quantum theory; they correspond to such operations only in sloppy language. So it is no wonder that they do not fulfill all the axioms of classical logic. The appropriate (non-misleading) name for "quantum logic" is lattice theory. Whether there is already an established name for "quantum probability theory" I don't know. But if this construction deserves a name at all (that is, if it is interesting enough for mathematicians to care about it), it would have to be named differently to avoid confusion among physicists.

That such misleading notions cause confusion is obvious. Classical logic (as well as classical probability theory understood as the "logic of plausible reasoning") is not a set of physical laws, but laws of thinking. As laws of thinking, they are superior to physical laws, because they are (and have to be) applied whenever we evaluate proposed physical theories or think about the consequences of the outcomes of particular experiments. So any theory which is in conflict with classical logic has to be rejected on that basis as logically inconsistent, and the same holds for any theory which is in conflict with the logic of plausible reasoning.
DarMM said:
Can you show me where in that paper they show that quantum theory "trivially" fits into Kolmogorovian probability in a way that isn't essentially the sense @atyy and I have mentioned.
It explicitly constructs that "common sample space" the existence of which you have explicitly denied:
I wouldn't say this, as even for pure states we lack a common sample space, which prevents one from thinking of preparations as ensembles.
In fact, that "common sample space" always exists, based on Stone's representation theorem for Boolean algebras, and the Kochen-Specker construction on p. 63 is essentially the application of that theorem to quantum theory.
 
  • #310
DarMM said:
Quantum Theory doesn't have a single sample space; that immediately blocks preparations from being ensembles. Unless one wants to move beyond the theory as it is, that's the end of the story.

However, even if you try to have one sample space, as in a hidden variable theory, the KS theorem tells you it has to include the device as a label on the outcomes. Thus the preparation is for system-device pairs, and so the device has to be mentioned.
Just to add, because this combination has possibly caused some misunderstanding between us:

I'm a mathematician by education, and I interpret the first claim as saying that such a single sample space cannot exist. That is not what is claimed in the second part, namely that you can have one, but in that case you have to make the sample space a little bit larger.
 
  • #311
Elias1960 said:
There is no quantum "generalization of classical probability theory"
There literally is. Debating this would be like debating whether differential geometry exists as a field. See Redei and Summers' classic introductory paper on the topic:
https://arxiv.org/abs/quant-ph/0601158
The rest of your post concerns quantum logic which I didn't mention.

Elias1960 said:
It explicitly constructs that "common sample space"
Where do they do that for quantum theory?
 
  • #312
Elias1960 said:
Just to add, because this combination has possibly caused some misunderstanding between us:

I'm a mathematician by education, and I interpret the first claim as saying that such a single sample space cannot exist. That is not what is claimed in the second part, namely that you can have one, but in that case you have to make the sample space a little bit larger.
A bit larger? The outcomes are no longer labelled by ##E##, some POVM element, but by ##(E, M)##, with ##M## any POVM containing ##E##. Thus the sample space is infinitely larger.

To some degree, what hidden variable theories are doing is like a generalization of Nash's embedding theorem. There we find that any 4D Lorentzian manifold can be represented as a submanifold of a Minkowski space of dimension 231. This whole line of discussion would be like trying to deny that pseudo-Riemannian geometry is a field, or that General Relativity has curved spacetimes, because Nash's embedding theorem holds.

In fact, it is even worse, since it would be as if we found that the Minkowski space had to be infinite-dimensional to embed a generic 4D curved spacetime. I think this analogy is worth thinking about, as it is exactly what hidden variable theories do mathematically: we restore classical physics at the cost of an infinite number of contextual degrees of freedom.

Again, my original comments with @vanhees71 were about QM, which does not have a single sample space. I think it should be possible to make this statement without having to talk about embeddings in unverified infinite-dimensional hidden variable theories, just as I could say that something is not true in General Relativity and is verified experimentally without discussing a possible embedding in a 231-dimensional Minkowski background.
 
  • #313
DarMM said:
There literally is. Debating this would be like debating whether differential geometry exists as a field. See Redei and Summers' classic introductory paper on the topic:
https://arxiv.org/abs/quant-ph/0601158
So the abstract already contains appropriate (non-misleading) terms for this: "noncommutative measure theory" and "von Neumann algebras". What I criticize is not that mathematical structures which do not fulfill all the axioms of probability theory are studied by those interested in such abstract mathematics, but the claim that quantum theory somehow requires such a generalization of probability theory.
DarMM said:
Where do they do that for quantum theory?
The reference, again,
Kochen, S., Specker, E.P. (1967). The Problem of Hidden Variables in Quantum Mechanics, J. Math. Mech. 17(1), 59-87. They do it on page 63, as already mentioned.
 
  • #314
DarMM said:
But it simply doesn't as a mathematical fact. An ensemble is an approximate realisation of a sample space. In quantum probability the state alone does not give one a well defined sample space due to Kochen-Specker contextuality. That's all there is to it.

What would help me is understanding what you think the Kochen-Specker theorem implies. Then we could more easily discuss this since currently what you are saying seems in direct contradiction to it.
The Kochen-Specker theorem formally demonstrates that QT is not consistent with the assumption that all observables are determined, and it makes this statement quantitatively testable by experiments, which confirm it.

I don't see why a preparation procedure doesn't define an ensemble, because an ensemble does not depend on the assumption that all observables take determined values. It's just a collection of objects where some properties are determined to a certain degree, enabling one to measure observables (no matter whether they take determined values before the measurement or not), leading to random results which can be statistically analyzed to test the probabilistic predictions of QT. That's why I don't understand the effect of the KS theorem on the notion of an "ensemble".
 
  • #315
DarMM said:
Quantum Theory doesn't have a single sample space; that immediately blocks preparations from being ensembles. Unless one wants to move beyond the theory as it is, that's the end of the story.

However, even if you try to have one sample space, as in a hidden variable theory, the KS theorem tells you it has to include the device as a label on the outcomes. Thus the preparation is for system-device pairs, and so the device has to be mentioned.
I don't need a single sample space to define an ensemble. Of course, you can only prepare ensembles with properties that make sense within QT. If you could experimentally demonstrate that you are able to prepare an ensemble violating the restrictions of QT (e.g., the Heisenberg uncertainty principle for position and momentum), that would be a disproof of QT, but no such case is known today.

Of course, to define the "random experiment" in the sense of probability theory completely, you need to specify the measurement setup completely, i.e., whether you measure the spin-x component or the spin-y component in a Stern-Gerlach experiment. Nevertheless, the silver atoms leaving the oven define an ensemble with more or less sufficiently determined properties (the distribution of momenta in the beam direction determined by the temperature of the Ag vapor in the oven, and a more or less defined position in the transverse direction determined by the aperture of the slits in front of the oven's opening) due to this specific preparation procedure. Independently of this, one can orient the magnetic field in any direction one likes, measuring the so-defined component of the magnetic moment in that direction.
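The Stern-Gerlach setup described above can be sketched numerically (my own toy illustration; the function name and the prepared state are hypothetical choices): the preparation fixes the state, while the freely chosen field direction fixes which spin component is measured and hence the outcome distribution.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def sg_probs(state, theta, phi=0.0):
    """Born probabilities for measuring spin along the direction (theta, phi)."""
    n_dot_sigma = (np.sin(theta) * np.cos(phi) * sx
                   + np.sin(theta) * np.sin(phi) * sy
                   + np.cos(theta) * sz)
    evals, evecs = np.linalg.eigh(n_dot_sigma)
    # columns of evecs are the spin states along -n and +n
    return {int(round(ev)): abs(np.vdot(evecs[:, k], state)) ** 2
            for k, ev in enumerate(evals)}

up_z = np.array([1.0, 0.0], dtype=complex)  # atoms prepared spin-up along z

print(sg_probs(up_z, 0.0))        # field along z: outcome +1 with certainty
print(sg_probs(up_z, np.pi / 2))  # field along x: 50/50
print(sg_probs(up_z, np.pi / 3))  # cos^2(theta/2) vs sin^2(theta/2)
```

The same preparation feeds every choice of field orientation; only the chosen orientation defines which random experiment is actually performed.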
 
  • #316
Elias1960 said:
So the abstract already contains appropriate (non-misleading) terms for this: "noncommutative measure theory" and "von Neumann algebras". What I criticize is not that mathematical structures which do not fulfill all the axioms of probability theory are studied by those interested in such abstract mathematics, but the claim that quantum theory somehow requires such a generalization of probability theory.
Well, it does. The title of the paper is "Quantum Probability Theory"; the field is called quantum probability theory. The fact that this field uses noncommutative measure theory and von Neumann algebras is exactly a reflection of the fact that it is a generalization of probability theory, which uses commutative measure theory and commutative von Neumann algebras.
That is, the structures in quantum theory are generalizations of those in probability theory. It contains generalizations of results from probability theory (e.g. de Finetti's theorem) and so on. It is a generalization of probability theory.

Elias1960 said:
The reference, again,
Kochen, S., Specker, E.P. (1967). The Problem of Hidden Variables in Quantum Mechanics, J. Math. Mech. 17(1), 59-87. They do it on page 63, as already mentioned.
This is just the construction generalized in the more modern ontological models framework, which we know must be infinite-dimensional, as I mentioned above.

Again, QM does not possess a single sample space, despite the fact that it can be embedded in an infinite-dimensional sample space, just as general solutions in General Relativity are not flat despite the fact that they can be embedded in a 231-dimensional Minkowski space.

Nobody would object to "Schwarzschild spacetime is curved" with "but it can be embedded in a 231-dimensional Minkowski spacetime!"
 
  • #317
vanhees71 said:
I don't see why a preparation procedure doesn't define an ensemble, because an ensemble does not depend on the assumption that all observables take determined values
Correct, but that's not the issue. The variables could all have indeterminate values, as in a classical stochastic theory, and yet a preparation would still be an ensemble, since there is one sample space.

QM doesn't have a single sample space, thus the preparations are not ensembles. It's that simple.
 
  • #318
Then, please, define what you mean by "sample space". A preparation procedure for an Ag-atom beam in the original SG experiment defines Ag atoms with properties specific enough to be interpretable within modern QT, and you can understand the outcome of measurements with independently chosen spin components to be measured. So why, in your opinion, do these Ag atoms not define an "ensemble" before the to-be-measured observable is also chosen? If this is the case, how can quantum states then have a well-defined operational meaning in the lab? Specifically, how is it then possible to describe all kinds of "delayed-choice experiments" successfully with QT?
 
  • #319
vanhees71 said:
Then, please, define what you mean by "sample space"
The standard definition from probability theory.

vanhees71 said:
If this is the case, how can quantum states then have a well-defined operational meaning in the lab? Specifically, how is it then possible to describe all kinds of "delayed-choice experiments" successfully with QT?
There's no real contradiction. That the state doesn't define an ensemble doesn't mean the state has no meaning, or that delayed-choice experiments cannot be described.
 
  • #320
Ok, here's the definition of Wikipedia:

In probability theory, the sample space (also called sample description space or possibility space) of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible ordered outcomes are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set").

That's how I understood it too. But as I said, the quantum state refers to preparation procedures, not to a "sample space" in this sense. The random experiment "measurement of some set of compatible observables" is of course only specified when this set of observables is chosen, but this choice is independent of the preparation procedure, and this is very important for the description of real-world experiments.
 
  • #321
vanhees71 said:
But as I said, the quantum state refers to preparation procedures, not to a "sample space" in this sense
Precisely, but in classical mechanics a preparation procedure does define a sample space.

Streater says the following:
The only difference is that, in quantum probability, there is more than one complete commuting set, and each gives a different sample space and probability: the statistical model is contextual
Also when discussing the EPR paper, the error is:
That is, they were arguing as if there were a sample space for the system

Perhaps you just mean "ensemble" in a looser sense, like "a pile of stuff", rather than the formal sense used in statistics and probability theory?
 
  • #322
The simplest way of phrasing it is perhaps that in a classical stochastic theory the outcomes, even though they are random, occur independently of the device. That is, each observable attains a value (even if a randomly driven one) regardless of whether one measures it or not. In quantum theory, only the observable you actually measure has a value or outcome.

Thus the preparation does not produce a collection of systems which sample the outcome space in a manner that approximately replicates the relevant probability distribution, since there are no outcomes without a device. To make the state into an ensemble, we must specify the device which will define the outcomes; then the preparation can be considered to constitute an ensemble.
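A minimal sketch of this point (my own illustration; the state and the two measurement contexts are hypothetical choices): the preparation alone fixes no outcome distribution, but once a device, i.e. a projective measurement context, is specified, the Born rule yields a distribution that can then be sampled as an ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical preparation: the spin state |+z> as a density matrix.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])

def born_probs(rho, projectors):
    """Outcome distribution, defined only once a measurement (the 'device') is fixed."""
    return np.real([np.trace(rho @ P) for P in projectors])

# Context 1: measure spin-z.
up_z, dn_z = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
# Context 2: measure spin-x.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
up_x, dn_x = np.outer(plus, plus), np.outer(minus, minus)

p_z = born_probs(rho, [up_z, dn_z])  # [1.0, 0.0]
p_x = born_probs(rho, [up_x, dn_x])  # [0.5, 0.5]

# Only after a context is chosen can the preparation be sampled as an
# ensemble over that context's outcome space:
samples = rng.choice([+1, -1], size=1000, p=p_x)
print(p_z, p_x, samples[:5])
```

Each context defines its own sample space and distribution; the state by itself selects neither.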
 
  • #323
Yes, sure. That's what's proven by all the Bell tests. I can live with the refinement of not saying that a state defines an ensemble.
 
  • #324
DarMM said:
Well, it does. The title of the paper is "Quantum Probability Theory"; the field is called quantum probability theory. The fact that this field uses noncommutative measure theory and von Neumann algebras is exactly a reflection of the fact that it is a generalization of probability theory, which uses commutative measure theory and commutative von Neumann algebras.
It contains generalizations of results from probability theory (e.g. de Finetti's theorem) and so on. It is a generalization of probability theory.
This is as reasonable as naming, say, complex numbers "generalized probabilities". It does not generalize them at all, except in a purely mathematical sense, but describes very different things.

That some mathematical ideas for proofs may be carried over is an irrelevant mathematical accident, similar to the fact that one can also add complex numbers.

No doubt that from a mathematical point of view it is a generalization of probability theory. This generalization is, nonetheless, uninteresting for anything related to probabilities in the real world.
DarMM said:
This is just the construction generalized in the more modern ontological models framework, which we know must be infinite-dimensional, as I mentioned above.
Whatever, it exists and is constructed in a quite trivial way. I have not claimed it has to be finite-dimensional.
DarMM said:
Again, QM does not possess a single sample space, despite the fact that it can be embedded in an infinite-dimensional sample space.
There is nothing in Kolmogorovian probability theory which requires that all probability distributions be part of a given theory. So, once there exists an embedding of the QM probability distributions into the Kolmogorovian probability distributions defined on some single sample space, it is standard probability theory.
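The embedding being discussed can be made concrete in a toy way (my own sketch; the distributions and the uniform context choice are hypothetical): label each outcome by the measurement context as well, and the context-wise quantum distributions assemble into one ordinary Kolmogorov distribution on the enlarged sample space, at the cost of the extra context label.

```python
from fractions import Fraction

# Hypothetical Born-rule distributions for the qubit state |0> in two
# incompatible contexts (spin-z and spin-x measurements):
born = {
    "z": {"+": Fraction(1), "-": Fraction(0)},
    "x": {"+": Fraction(1, 2), "-": Fraction(1, 2)},
}

# Choose the context at random too (uniformly, say); outcomes are now
# labelled by (context, result) pairs, i.e. one larger sample space.
contexts = list(born)
joint = {(c, r): Fraction(1, len(contexts)) * p
         for c in contexts for r, p in born[c].items()}

# This is a perfectly ordinary Kolmogorov probability distribution ...
assert sum(joint.values()) == 1
# ... whose conditionals recover each context's quantum distribution:
for c in contexts:
    marginal = sum(p for (c2, _), p in joint.items() if c2 == c)
    for r in born[c]:
        assert joint[(c, r)] / marginal == born[c][r]
print(joint)
```

This illustrates both sides of the exchange: the embedding exists and is classical, but its sample space carries the measurement context as an extra coordinate.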
DarMM said:
Just as general solutions in General Relativity are not flat despite the fact that they can be embedded in a 231 dimensional Minkowski space.
Nice try, but not appropriate. There is no essential property of probability distributions which is lost or gained by an embedding. If the states we can prepare are only a subset of the quantum equilibrium states, this is not fine, but it changes nothing in the rules of Kolmogorovian probability theory.
 
  • #325
Elias1960 said:
Nice try, but not appropriate. There is no essential property of probability distributions which is lost or gained by an embedding. If the states we can prepare are only a subset of the quantum equilibrium states, this is not fine, but it changes nothing in the rules of Kolmogorovian probability theory.
Elias1960 said:
Whatever, it exists and is constructed in a quite trivial way. I have not claimed it has to be finite-dimensional.
There are no essential properties of Schwarzschild space gained or lost by its embedding either.
Quantum Theory itself does not have a single sample space. That is a fact.

That it can be embedded in an infinite-dimensional sample space containing variables nobody has ever witnessed negates this no more than the fact that Schwarzschild spacetime can be embedded in a 231-dimensional Minkowski spacetime of which nobody has any evidence.

And just as, to replicate Schwarzschild spacetime, you'd have to posit that we're confined to a hypersurface of this massive Minkowski space, to replicate QM you have to assume we're confined epistemically within this infinite-dimensional sample space.

I don't really know what the purpose is in pointing this out.

Elias1960 said:
This is as reasonable as naming, say, complex numbers "generalized probabilities". It does not generalize them at all, except in a purely mathematical sense, but describes very different things.

That some mathematical ideas for proofs may be carried over is an irrelevant mathematical accident, similar to the fact that one can also add complex numbers.

No doubt that from a mathematical point of view it is a generalization of probability theory. This generalization is, nonetheless, uninteresting for anything related to probabilities in the real world.
Well, we have every expert on the topic calling it quantum probability, with quotes like:
Streater Lost Causes p.38 said:
it is natural to interpret quantum mechanics as a generalization of classical probability
Stephen Summers in Quantum Probability Theory said:
This should help the reader in Section 4 to recognize more readily the probability theory inherent in the theory of normal states on von Neumann algebras, which is the setting of noncommutative probability theory. Classical probability theory finds its place therein as the special case where the von Neumann algebra is abelian. Nonrelativistic quantum mechanics is then understood in Section 5 as the special case where the von Neumann algebra is a nonabelian type I algebra.
Scott Aaronson in Quantum Computing since Democritus said:
Quantum mechanics is a beautiful generalization of the laws of probability
Somehow they are all wrong, though, and you are right, for reasons you cannot articulate. In the spirit of probability theory, I will let others assign their own priors to this belief, as I don't have the energy to debate whether a field has been correctly named and classified by its own experts.
 
  • #326
Elias1960 said:
No doubt that from a mathematical point of view it is a generalization of probability theory. This generalization is, nonetheless, uninteresting for anything related to probabilities in the real world.
Surely you must be joking? There are tonnes of generalizations of the concept of chance which have mathematical formulations and/or applications and yet cannot be treated, even in principle, as a form of Kolmogorovian probability theory (KPT).

In fact, it is both an unjustified reductionism to treat chance as probability and a frequently made category error to treat the concept of probability as if it were de facto described by KPT. For example, QT itself is a theory which has concerned itself with negative probabilities: these already directly violate Kolmogorov's axioms.

The fact that PT has been axiomatized, while impressive, is a grossly exaggerated achievement. In actuality, KPT is only a theory of probability, which is generally and prematurely seen as the theory of probability, in much the same way that Newtonian mechanics is a theory of mechanics which was erroneously seen as the theory of mechanics.

In other words, the assumed uniqueness of KPT in describing the concept of chance is not merely unjustified but unjustifiable, because it has actually been disproven mathematically by the discovery or invention of alternative mathematical frameworks which specifically subsume KPT as a certain idealized limiting case.
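A side note on the negative-probabilities remark above: what usually appears in QT are quasi-probability representations such as the Wigner function, which normalize like probability distributions but can go negative. A toy sketch (my own, using the discrete Wootters-style qubit Wigner function, with a hypothetical state chosen to exhibit negativity):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

# Phase-point operators of the discrete Wigner function for a qubit.
def phase_point(a, b):
    return 0.5 * (I + (-1)**a * Z + (-1)**b * X + (-1)**(a + b) * Y)

def wigner(rho):
    return np.real([[0.5 * np.trace(rho @ phase_point(a, b)) for b in (0, 1)]
                    for a in (0, 1)])

# A state with Bloch vector -(1,1,1)/sqrt(3):
n = -np.ones(3) / np.sqrt(3)
rho = 0.5 * (I + n[0] * X + n[1] * Y + n[2] * Z)

W = wigner(rho)
print(W)
print("sum =", W.sum())  # normalizes to 1, like a probability distribution
print("min =", W.min())  # but one entry is negative: (1 - sqrt(3))/4
```

The negative entry is exactly why such objects violate Kolmogorov's axioms if read literally as probabilities, which is the point under dispute in the surrounding posts.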
 
  • #327
Auto-Didact said:
Surely you must be joking? There are tonnes of generalizations of the concept of chance which have mathematical formulations and/or applications and yet cannot be treated, even in principle, as a form of Kolmogorovian probability theory (KPT).
That there are tons of "generalizations" in the mathematical sense is a triviality: remove whatever axiom you like least, and you have a generalization in the mathematical sense.
Auto-Didact said:
In fact, it is both an unjustified reductionism to treat chance as probability and a frequently made category error to treat the concept of probability as if it were de facto described by KPT. For example, QT itself is a theory which has concerned itself with negative probabilities: these already directly violate Kolmogorov's axioms.
Nice example - but it only shows that the interpretations which treat those negative things as probabilities are nonsense.
Auto-Didact said:
The fact that PT has been axiomatized, while impressive, is a grossly exaggerated achievement. In actuality, KPT is only a theory of probability, which is generally and prematurely seen as the theory of probability, in much the same way that Newtonian mechanics is a theory of mechanics which was erroneously seen as the theory of mechanics.
The point is not that it has been axiomatized. The point is the particular axiomatization, given by Cox and Jaynes, of the logic of plausible reasoning. To generalize it means, essentially, to accept forms of plausible reasoning in which using different ways to argue would lead to different results; in other words, it would allow inconsistent reasoning.
Auto-Didact said:
In other words, the assumed uniqueness of KPT to be capable of describing the concept of chance is not merely unjustified, but unjustifiable because it has actually been disproven mathematically by the discovery or invention of alternate mathematical frameworks which specifically subsume KPT as a certain idealized limiting case.
I do not care about a "concept of chance", but about the rules of consistent plausible reasoning.
 
  • Skeptical
Likes weirdoguy
  • #328
DarMM said:
There's no essential properties of Schwarzschild space gained or lost by its embedding either.
There are. Everything which mentions curvature is affected, because the curvature is different; it is even a completely different mathematical object on the Schwarzschild space and on the higher-dimensional space into which you have embedded it.
DarMM said:
Quantum Theory itself does not have a single sample space. That is a fact.
Ok, if you repeat falsehoods even after you have been confronted with an explicit (and simple) construction of a counterexample, I cannot do anything about it. Feel free to continue to believe this. I give up.
DarMM said:
And just like to replicate Schwarzschild spacetime you'd have to posit we're confined to a hypersurface of this massive Minkowski space, to replicate QM you have to assume we're confined epistemically in this infinite-dimensional sample space.
A subset of probability distributions over a given sample space remains a set of probability distributions over this sample space.
DarMM said:
Well we have every expert on the topic calling it quantum probability. ...
Somehow they are all wrong though and you are right for reasons you cannot articulate.
One can interpret those quotes as referring to the purely mathematical "generalizations", which one can simply obtain by taking away some axioms.

But these mathematical generalizations do not define a reasonable set of rules of plausible reasoning, in the same way as lattice theory, which has been named "quantum logic", does not define a reasonable replacement of the rules of logic.

As long as you simply take away some axioms, you simply reduce your ability to derive something. If you add, instead, some modification of an axiom, you will end up with inconsistent nonsense. Not because the new set of axioms has internal contradictions (the abstract set of axioms may have some nontrivial models), but because these new axioms are not laws of reasoning.

I have articulated the reasons, in particular by the reference to the explicit construction of that sample space which you claim does not exist.

But, aside, Streater is indeed a lost cause. To show this, it is sufficient to quote the beginning of his section about Bohmian mechanics:
This subject was assessed by the NSF of the USA as follows [Cushing, J. T., review of [28]] “. . . The causal interpretation [of Bohm] is inconsistent with experiments which test Bell’s inequalities. Consequently . . . funding . . . a research programme in this area would be unwise”. I agree with this recommendation.
So, feel free to support any argument he gives yourself, but as some sort of reference to scientific authority, he is completely worthless.
 
  • Skeptical
Likes weirdoguy
  • #329
Elias1960 said:
Ok, if you repeat falsehoods even after you have been confronted with an explicit (and simple) construction of a counterexample, I cannot do anything about it. Feel free to continue to believe this. I give up.
This will be my last post on this.

That's not a counter-example. You've shown that the objects of quantum theory can be embedded in an infinite-dimensional object that is not in quantum theory.

A Gelfand homomorphism is a map that takes C*-algebra elements to functions over a topological space (the spectrum of the algebra). This space is then the sample space.
Quantum theory's observable algebra admits no Gelfand homomorphism that covers all of the algebra. Thus it does not have one sample space. The end.
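A minimal numerical sketch of this point (my own toy example, not a construction from the thread): a common eigenbasis, i.e. a joint spectrum that could serve as a single sample space, exists exactly when the observables commute.

```python
import numpy as np

# Pauli X and Z: the standard non-commuting pair of qubit observables.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# No joint spectral measure: X and Z do not commute, so no single
# Gelfand-style sample space represents both as random variables.
print(np.allclose(X @ Z, Z @ X))  # False

# By contrast, Z and any polynomial in Z commute, so spectral theory
# represents both as functions on the common sample space {+1, -1}.
f_Z = Z @ Z + 3 * Z
print(np.allclose(Z @ f_Z, f_Z @ Z))  # True
```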

What you are doing is finding an algebra with infinitely many degrees of freedom in which the quantum algebra is embedded as a subset. Note, though, that it is not a subalgebra: the embedding destroys some algebraic properties. Then from the fact that this much larger algebra, with observables never seen in a lab, has one sample space, you conclude that QM has one sample space.

This simply doesn't make any sense. As I said it's like embedding every spacetime from General Relativity in 231-D Minkowski and declaring GR deals with flat spaces.

In fact it is worse, since the generalized Nash embedding theorem tells us all properties of those manifolds are preserved, e.g. the curvature is retained as extrinsic curvature in the surrounding space, whereas the embedding destroys the algebraic relations in the quantum algebra.
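A toy illustration of why such an embedding cannot preserve the algebraic relations (my own sketch, not the specific construction debated above): any map into a commutative algebra makes the images commute, so it cannot be an algebra homomorphism on non-commuting observables.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

# Hypothetical "classical" images: diagonal matrices carrying only the
# spectra (eigvalsh returns eigenvalues in ascending order).
phi_X = np.diag(np.linalg.eigvalsh(X))  # diag(-1, +1)
phi_Z = np.diag(np.linalg.eigvalsh(Z))  # diag(-1, +1)

# The images commute (diagonal matrices always do)...
print(np.allclose(phi_X @ phi_Z, phi_Z @ phi_X))  # True
# ...but the originals do not, so the commutator relation is destroyed.
print(np.allclose(X @ Z, Z @ X))  # False
```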
 
  • Like
Likes mattt, Auto-Didact and Tendex
  • #330
I'd not say QT is a generalization of probability theory; rather, it is an extension providing a scheme to predict concrete probability measures for the outcomes of measurements on physical systems. As I understand it from the many discussions in this forum, the most general mathematical scheme to do this is standard QT (with Born's rule as the definition of the meaning of states, i.e., of the statistical operator of the system) together with a POVM. A special case is the "complete measurement" à la von Neumann, described by projector-valued measures (PVMs). These schemes are applicable in practice for "small systems" like collisions of two particles producing a plethora of new particles at the LHC, quantum-optics experiments with a few photons and charged particles, and few-body systems like atomic nuclei, atoms, molecules etc.
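As a concrete sketch of the Born rule with a POVM (my own minimal example, not tied to any experiment mentioned above): the "trine" POVM on a qubit, three effects built from states spaced at 120 degrees, whose outcome probabilities are p_k = Tr(ρ E_k).

```python
import numpy as np

def bloch_state(theta):
    """Pure qubit state cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Three effects E_k = (2/3)|psi_k><psi_k| at 120-degree Bloch angles.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
effects = [(2 / 3) * np.outer(bloch_state(t), bloch_state(t).conj())
           for t in angles]

# A POVM's effects must sum to the identity.
print(np.allclose(sum(effects), np.eye(2)))  # True

# Born rule for the state rho = |0><0|: p_k = Tr(rho E_k).
rho = np.outer(bloch_state(0.0), bloch_state(0.0).conj())
probs = [np.trace(rho @ E).real for E in effects]
print(np.isclose(sum(probs), 1.0))  # True; each p_k is nonnegative
```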

I don't think that this is sufficient, though. Another very important ingredient in the realm of many-body theory is the application of information theory, i.e., the maximum-entropy principle, to QT, which provides another technique to postulate the (initial) statistical operators for a given situation in a sufficiently coarse-grained sense. Only with these quantum-statistical approaches are you able to close the gap between the microscopic description, which in practice is possible only for few-body systems, and macroscopic matter, with which we deal in everyday life (including the measurement devices in the lab) and which we describe by (semi-)classical physics.
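The maximum-entropy principle mentioned above can be sketched in a toy example (my own, with a made-up three-level spectrum): among all probability assignments with a fixed mean energy, the Gibbs weights exp(-βE_n)/Z maximize the entropy; for a diagonal statistical operator the von Neumann entropy is just the Shannon entropy of its eigenvalues.

```python
import numpy as np

energies = np.array([0.0, 1.0, 2.0])  # toy spectrum
beta = 1.0                            # Lagrange multiplier fixing <E>

w = np.exp(-beta * energies)
p = w / w.sum()                       # Gibbs state (diagonal)

def entropy(q):
    """Shannon entropy; equals the von Neumann entropy here."""
    return -(q * np.log(q)).sum()

# A competitor with the same mean energy: shifting along (1, -2, 1)
# changes neither the normalization nor <E> for this spectrum.
q = p + 0.01 * np.array([1.0, -2.0, 1.0])

print(np.isclose((p * energies).sum(), (q * energies).sum()))  # True
print(entropy(p) > entropy(q))  # True: the Gibbs state wins
```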
 
