I The interpretation of probability

  • #31
gentzen said:
Those reproduce the quantum probabilities by the usual interpretation of de Broglie-Bohm
Sorry, thinking about this again: how is this done? The underlying equations are deterministic, so how do you ensure you get 1-random frequencies like QM?
gentzen said:
The goal of Killtech is not to reduce quantum probabilities to classical physics
Genuinely I understand this isn't about classical physics. I'm solely talking about how quantum probability cannot be replicated by classical probability.
I fully get that that paper is defining a classical probability measure over pure states, but I don't see how that recovers quantum probabilities without simply interpreting pure states as already probabilistic constructs, as opposed to "real waves". Once again, I understand the distinction between classical probabilities and classical physics.
 
Last edited:
  • #32
Son Goku said:
Sorry, thinking about this again: how is this done? The underlying equations are deterministic, so how do you ensure you get 1-random frequencies like QM?
Good question, even though this is maybe not the best thread to ask it in, because the resident Bohmian experts will probably not answer it here. The probability measure on the configuration space of particle positions, and drawing a configuration of particle positions according to that measure, is not the problem. That part is just assumed to be given.

One problem is how to get randomness from one particular configuration of particle positions drawn according to that measure. This is achieved by the notion of typicality: the particular drawn configuration has an overwhelmingly high probability of being typical, at least if the configuration space is big enough. How big is big enough? Not sure, but at least ##10^{23}## is definitely big enough.
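As a crude classical analogy of what typicality buys you (just my own illustration, not the Bohmian quantum-equilibrium measure itself): a single "configuration" of very many independent draws from a distribution almost always has empirical frequencies close to that distribution, and the probability of drawing a non-typical configuration shrinks rapidly with the size.

```python
# Crude illustration of typicality (classical analogy, my own sketch): one particular
# configuration of N independent draws almost always reproduces the target frequencies,
# with deviations of order 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
p = np.array([0.5, 0.3, 0.2])                 # target distribution over 3 outcomes

N = 100_000                                   # "size" of the configuration
config = rng.choice(3, size=N, p=p)           # one particular drawn configuration
freq = np.bincount(config, minlength=3) / N

print(freq)                                   # close to p for a typical configuration
print(np.max(np.abs(freq - p)))               # deviation of order 1/sqrt(N)
```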

The other problem is how to get actual measurement results from particle positions that are, in a certain sense, not directly observable either. But at least they exist, according to the ontology of Bohmian mechanics. And so you end up with a measurement theory which tries to explain this.
 
  • Like
Likes Son Goku
  • #33
Work & life are keeping me busy, so sorry I am unable to respond in a timely manner.

Son Goku said:
Genuinely I understand this isn't about classical physics. I'm solely talking about how quantum probability cannot be replicated by classical probability.
I have tried to find any indication that quantum probabilities are different from classical ones, but I couldn't find anything that would show that. All the no-go theorems are very interesting, but all of them ultimately focus on some classical physical assumptions and expose that those cannot work. Unfortunately they have to embed these into some probability model, due to the nature of probabilistic results in real experiments.

The issue is that people seem to have quite a lot of misconceptions about classical probability - like, for example, where do you get the idea from that
Son Goku said:
Kolmogorov's axioms assume a single sample space, which is an assumption beyond the two you gave
Except for people here on the physics forums claiming that, I have never seen such an assumption in probability theory. Please look up the axioms, you won't find any mention of it. Kolmogorov's definition outlines the minimum conditions for a probability measure, together with its technical requirements (taken together they define a probability space). But there is no limit on how many of those you may have (why would there be?). In fact, every time you define a random variable, you define a new probability space on the possible values that variable can take (the state space of said variable), with its own measure and sigma algebra.

In the case of CHSH we work with random variables, so we are discussing exactly such a situation. Maybe we should first try to find out where you got this strange idea from?

Son Goku said:
Kolmogorov's axioms involve more than those conditions; they explicitly define probability models as measures on the sigma algebra of a single sample space.
Sure, Kolmogorov also needs set theory as a basis to define functions and sets, but Hilbert spaces need the same. I indeed didn't count the technical assumptions that QM also requires. As for the sigma algebras, you are aware that those are purely needed to ensure integrals are always well defined? It's particularly relevant for QM, whose Hilbert space is some form of ##L^2## space - a space (named after Lebesgue, the measure theory mathematician) that only works due to the choice of the Borel sigma algebra. So QM has to make use of the very same technical framework to begin with - and quite a lot more. That may not be explicitly mentioned in physics lectures, but it doesn't change the facts.

Son Goku said:
Honestly I'm not confusing classical physics and classical probability. This construction seems parasitic on quantum probabilities for the pure case. How are the pure case probabilities replicated?
A pure state translates into a Dirac probability measure in probability theory - i.e. an (almost) certain state. So if you calculate ##\rho_\mu = \int \mu(d\psi)\, |\psi\rangle \langle \psi|## with ##\mu## being a Dirac delta distribution, you simply get the DO for that pure state, i.e. you get the exact same description you have in QM.
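For concreteness, here is a minimal numerical sketch of that point (my own illustration, not from the cited paper; a single qubit, with the measure ##\mu## represented by discrete weights over a list of pure states):

```python
# Sketch only: a discrete version of rho_mu = integral mu(d psi) |psi><psi| for a qubit.
# A Dirac measure (all weight on one state) returns exactly the pure-state DO;
# a spread-out measure returns a genuinely mixed DO.
import numpy as np

def density_operator(weights, states):
    """Mix the projectors |psi><psi| with the given probability weights."""
    return sum(w * np.outer(psi, psi.conj()) for w, psi in zip(weights, states))

psi = np.array([1, 1j]) / np.sqrt(2)                 # some pure state
rho_dirac = density_operator([1.0], [psi])           # Dirac measure at psi
assert np.allclose(rho_dirac, np.outer(psi, psi.conj()))   # same DO as in QM

phi = np.array([1, 0], dtype=complex)
rho_mixed = density_operator([0.5, 0.5], [psi, phi]) # non-degenerate measure
print(np.linalg.eigvalsh(rho_mixed))                 # two nonzero eigenvalues -> mixed
```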

Hence this measure is really nothing other than an inconvenient way to denote DOs in a classical probability formalism. And as @gentzen said, it is indeed:
gentzen said:
That construction sort of gives you the barycentric coordinates that Killtech was looking for
where each ##|\psi\rangle \langle \psi|## becomes a vertex of an infinite-dimensional simplex.

I already knew this probability measure, and one intention of this thread was to understand whether it is the most "minimal" probability measure that works.
 
  • Like
Likes gentzen
  • #34
A. Neumaier said:
The vector space spanned by the density matrices in d dimensions is simply the ##d^2##-dimensional space of all Hermitian d by d matrices. For d=2 one can represent them as linear combinations of the identity and the Pauli matrices; the coefficients are the components of the Stokes vector. That's why the latter is relevant in optics. For ##d>2## a vector representation is much less useful than the matrix representation.

But all this is quite unrelated to the d-dimensional simplex in d+1 dimensional vector space, which characterizes classical probability.

Note that every finite-dimensional manifold may be regarded as part of an affine space, but the latter plays only rarely an important role in the analysis of the former.
Hmm, indeed you are right, a basis of Hermitian matrices should do the trick. With it we can make use of the fact that an operation like ##S \rho S^\dagger## is linear in ##\rho##. For the qubit case ##d=2## we can write ##\rho## as a 4-dimensional vector in that basis and use ##\tilde S \rho## to express the same operation via a 4x4 matrix. Since the commutator is also linear in ##\rho##, this also works for ##[H, \rho]## within that space, which implies that there must exist a 4x4 matrix ##\tilde H## such that ##\rho(t) = e^{t \tilde H}\rho(0)##.
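For what it's worth, here is a small numerical check of that statement (my own sketch, assuming ##\hbar = 1## and the von Neumann equation ##\dot\rho = -i[H,\rho]##): expanding ##\rho## in the Hermitian basis ##\{I,\sigma_x,\sigma_y,\sigma_z\}## turns the commutator into a real 4x4 generator ##\tilde H##, and ##e^{t\tilde H}## applied to the coefficient vector reproduces the usual unitary evolution.

```python
# Sketch: vectorize a qubit DO in the Hermitian basis {I, sx, sy, sz} and build the
# 4x4 generator H_tilde of d rho/dt = -i [H, rho], so that the coefficient vector
# of rho(t) equals expm(t * H_tilde) applied to the coefficient vector of rho(0).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
B = [I2, sx, sy, sz]                                  # Tr(B_j B_k) = 2 delta_jk

def to_vec(rho):                                      # coefficients c_k = Tr(B_k rho)
    return np.array([np.trace(b @ rho).real for b in B])

def from_vec(c):                                      # rho = (1/2) sum_k c_k B_k
    return sum(ck * b for ck, b in zip(c, B)) / 2

H = 0.7 * sx + 0.3 * sz                               # arbitrary example Hamiltonian
rhs = lambda rho: -1j * (H @ rho - rho @ H)           # right-hand side of d rho/dt

# column k of H_tilde is the coefficient vector of rhs(B_k / 2)
H_tilde = np.column_stack([to_vec(rhs(b)) / 2 for b in B])

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)      # initial pure state |0><0|
t = 1.3
rho_t = from_vec(expm(t * H_tilde) @ to_vec(rho0))    # evolution via the 4x4 matrix

U = expm(-1j * t * H)                                 # standard unitary evolution
assert np.allclose(rho_t, U @ rho0 @ U.conj().T)
```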

Now that formulation looks almost the same as a continuous-time Markov chain, with the exception that the DOs form a different subspace. So we have to ask: what class of matrices ##\tilde S## maps the DO space onto itself? For a probability space those are the stochastic matrices, but here we need a slightly larger class.
What is it?

A. Neumaier said:
But all this is quite unrelated to the d-dimensional simplex in d+1 dimensional vector space, which characterizes classical probability.
Well, after this discussion I have come to the conclusion that the smallest simplex that can correctly encompass all pure states must indeed have infinite dimension. That is the probability space of the classical probability measure gentzen posted links to.

I never liked it because the infinite dimensionality makes it very tedious to work with. And on the other hand there is a lot of linearity in the Hilbert space that the measure cannot make any use of. Given that we are dealing with such very special probability spaces, I was trying to understand whether its construction might not be minimal. But now I think it is, and I understand the challenge.

With something like the Hermitian basis, however, I come to realize that we can get around needing the continuous-state-space probability formalism. Instead it seems we can use a helper measure that, while not being a proper probability measure, still fully determines the experimentally distinguishable part of the actual probability measure. That way a classical probabilistic description won't have to deal too much with the Markov kernel formalism.
 
  • #35
Killtech said:
it seems we can use a helper measure that, while not being a proper probability measure, still fully determines the experimentally distinguishable part of the actual probability measure.
Quite likely what you are looking for is the Wigner quasi-probability representation...
 
Last edited:
  • Like
Likes Killtech and jbergman
  • #36
A. Neumaier said:
Quite likely what you are looking for is the Wigner pseudo-probability representation...
Thanks! Took me a little reading because Wigner's pseudo-probability is quite a bit more special. However, the article on quasiprobability distributions does explain the general framework which I am looking for.

That article claims that coherent states form an overcomplete basis (of the Hilbert space) and allow one to diagonalize every DO. But seeing that the quasiprobability distribution is formulated in terms of DOs like this, ##\hat \rho = \int f(\alpha, \alpha^*)|\alpha\rangle \langle \alpha|\,d^2\alpha##, we can just take the ##|\alpha\rangle \langle \alpha|## as a basis for DOs instead. Also note that this measure is very suspiciously close to the classical probability measure from @gentzen's sources.

Going back to the simplex discussion, if we were to look what happens if we restrict the quasiprobabilities to actual probabilities, we see the Bloch sphere encompasses the resulting probability simplex in the following way:
[Image: the Bloch sphere encompassing the resulting probability simplex]

So we can directly see that for every state on the Bloch sphere there exists a rotation of the simplex vertices (i.e. of the DO basis) such that the state lies within the simplex. This is interesting because we can make this work with time evolution: if our classical probability space instead uses the time-dependent basis ##|\alpha(t)\rangle \langle \alpha(t)|##, then a DO that was initially on the simplex will at all times remain on the simplex within that moving basis. So for a given initial state we can have a low-dimensional classical probability space that fully describes its behavior. But different initial states need different probability spaces.
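Here is a small numerical illustration of that claim (my own sketch, not from the linked sources; the simplex vertices are taken as a regular tetrahedron of pure states on the Bloch sphere, just a convenient example choice): rotating the tetrahedron so that one vertex points along the Bloch vector of a given state makes its barycentric coordinates nonnegative, i.e. the state lies inside the simplex.

```python
# Sketch: a qubit DO with Bloch vector r lies inside a tetrahedral simplex of four
# pure states once the tetrahedron is rotated so that one vertex points along r.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
bloch_to_rho = lambda r: 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# vertices of a regular tetrahedron on the unit (Bloch) sphere
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

rng = np.random.default_rng(0)
r = rng.normal(size=3)
r *= rng.uniform() / np.linalg.norm(r)                # random Bloch vector with |r| <= 1

def frame(v):                                         # rotation matrix with v/|v| as first column
    v = v / np.linalg.norm(v)
    a = np.array([1.0, 0, 0]) if abs(v[0]) < 0.9 else np.array([0, 1.0, 0])
    u1 = np.cross(v, a)
    u1 /= np.linalg.norm(u1)
    return np.column_stack([v, u1, np.cross(v, u1)])

R = frame(r) @ frame(verts[0]).T                      # rotation sending verts[0] to r/|r|
rot_verts = verts @ R.T

# barycentric coordinates: solve sum_i w_i v_i = r with sum_i w_i = 1
A = np.vstack([rot_verts.T, np.ones(4)])
w = np.linalg.solve(A, np.append(r, 1.0))
assert np.all(w > -1e-12)                             # nonnegative -> r is inside the simplex

# the corresponding mixture of the four pure states reproduces the DO of r
rho_mix = sum(wi * bloch_to_rho(v) for wi, v in zip(w, rot_verts))
assert np.allclose(rho_mix, bloch_to_rho(r))
```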

However, with a good choice of the DO basis we could try to make a rotating simplex that covers the whole of the Bloch sphere, if not initially then at least over time. This way we can evolve any initial state to the point where it gets into the simplex, i.e. we model the system as having an unknown time shift.

That may establish a connection to the full classical probability measure ##\rho_\mu = \int \mu(d\psi)\,|\psi\rangle\langle\psi|## by simply averaging the integral over subspaces of states that are time-shifted versions of each other.

I have to find more to read on this topic and think it through.
 
  • #37
Killtech said:
as a basis for DOs instead.
Except that it is not a basis in the mathematical sense, but an overcomplete set.

Killtech said:
So for a given initial state we can have a low dimensional classical probability space that fully describes its behavior.
No. You also need the time dependence of the rotation to describe the behavior. Moreover, you introduce a lot of representation ambiguity since one can start with any simplex whose vertices are on the Bloch sphere.
 
  • #38
A. Neumaier said:
Except that it is not a basis in the mathematical sense, but an overcomplete set.
overcomplete in the Hilbert space of quantum states, yes. The DOs formed from those states by ##|\psi\rangle\langle\psi|## however live in a different space which has a higher dimension, thus a true basis of the Hilbert space just isn't enough to cover it all. The fact that those states are enough to diagonalize any DO is probably enough for them (written as DOs) to be a complete basis of the space they are embedded in, but I am not sure and have to check that.

A. Neumaier said:
No. You also need the time dependence of the rotation to describe the behavior. Moreover, you introduce a lot of representation ambiguity since one can start with any simplex whose vertices are on the Bloch sphere.
Mixed states will always come with an ambiguity no matter what, so it's just a question of where you want to put it. Hence it's perfectly fine for the initial rotation matrix to be ambiguous - especially if all ambiguity can be isolated into it.

My thinking was that the time dependence of the matrix can be removed by using, instead of a static probability space, the space of a time-dependent process, for which there is no problem representing a rotating simplex implicitly: for each ##t## we have its own probability space anyway, for which we may choose its very own basis - so we may always pick the slightly time-evolved DOs from the basis of the space before. Practically, the time-dependent rotation becomes incorporated into the process space via a choice of time-dependent basis.

Unfortunately, it would mean that any random variables acting on that probability space would have to unwind that rotation themselves, so practically such a choice means we are in a Heisenberg-like picture. Yeah, that might not be an ideal choice, but again, I have to think this through.
 
  • #39
Killtech said:
overcomplete in the Hilbert space of quantum states, yes. The DOs formed from those states by ##|\psi\rangle\langle\psi|## however live in a different space which has a higher dimension, thus a true basis of the Hilbert space just isn't enough to cover it all.
This doesn't help. The dimension of the space of Hermitian matrices is the square of the dimension of the space of wave functions. Thus a basis has only finitely many terms, while the set of coherent states is infinite-dimensional.

Killtech said:
Mixed states will always come with an ambiguity
No. Representing a mixed state by its density matrix is not ambiguous at all.

Killtech said:
My thinking was that the time dependence of the matrix can be removed by using, instead of a static probability space, the space of a time-dependent process, for which there is no problem representing a rotating simplex implicitly: for each ##t## we have its own probability space anyway, for which we may choose its very own basis
then everything depends on a time-dependent coordinate system, which is not better.
Think of representing classical motions that way...
 
Last edited:
  • Like
Likes gentzen
  • #40
A. Neumaier said:
then everything depends on a time-dependent coordinate system, which is not better.
Think of representing classical motions that way...
I don't know. Actually, now that you mention it, it reminds me quite a bit of the Hamilton-Jacobi equation in classical mechanics, where any complex problem is reduced to the triviality of a simple constant abstract rotation in some crazily complicated representation. I think the HJE isn't employed often, mostly due to the impossibility of finding a proper transformation - but here, for the finite-dimensional QM situation, it actually looks reliably doable. But I haven't had time to think this all through yet, and maybe the resemblance is misleading me and this isn't comparable.

Then again, I actually never thought of that until you made me. Indeed, the quantities related to the constants of motion in the HJE, like the periods of cyclic coordinates, translate directly into the eigenvalues of the corresponding Markov process (its transition/rate matrix/kernel), which represents the same (deterministic) dynamics but for an arbitrary random initial distribution. Thanks for bringing this up.

A. Neumaier said:
No. Representing a mixed state by its density matrix is not ambiguous at all.
Hmm, in a sense you are right, but this isn't what I had in mind.

The ambiguity in this approach is not simply about representation, since apparently the valid representations are additionally constrained in a complicated manner. One could bring that up as an argument against such an idea. But I am trying to understand if this may have something to do with further hidden physical information inherent in the system instead.

If we only had to deal with such systems purely in the isolated case, I would agree with you. But if you consider what happens when we couple one to another quantum system, things get interesting: the dimension of the DO space grows faster than one would normally expect from looking at the DO spaces of the individual systems. Higher dimension means the system stores more information. Classically one would interpret this as the complex interaction uncovering additional information from the individual systems that is physically inaccessible/not differentiated while each is in isolation.

So upon combining two quantum systems, even knowing exactly which density matrix each system was in, I think that is insufficient to uniquely determine the density matrix of the combined system, isn't it? The individual DOs seem to lack some information. For the initial state, that may not make any difference if measured instantly, but once the system has evolved under a Hamiltonian producing some entanglement between the coupled systems, mixed states will show more diverse behavior, making use of the additional dimensions. Interestingly, the classical measure with its massive ambiguity in the isolated case never runs into this problem and uniquely determines a mixed state regardless of what other systems it is coupled to.
 
  • #41
Killtech said:
If we only had to deal with such systems purely in the isolated case, I would agree with you. But if you consider what happens when we couple one to another quantum system, things get interesting: the dimension of the DO space grows faster than one would normally expect from looking at the DO spaces of the individual systems. Higher dimension means the system stores more information.
The dimension is squared. Density matrices are routinely used for open systems (Lindblad equations and generalizations). They indeed contain much more information than the pure case, which is completely useless for open systems because it cannot represent any realistic system (except at very low temperature).
Killtech said:
So upon combining two quantum systems, even knowing exactly which density matrix each system was in, I think that is insufficient to uniquely determine the density matrix of the combined system, isn't it?
Of course, since the information about correlations is missing.

But since classical random systems already have the same sort of incompleteness, your demand is unreasonable.
 
  • Like
Likes gentzen
  • #42
A. Neumaier said:
The dimension is squared. Density matrices are routinely used for open systems (Lindblad equations and generalizations). They indeed contain much more information than the pure case, which is completely useless for open systems because it cannot represent any realistic system (except at very low temperature).
Is the dimension squared though? I forgot to account for the monogamy of entanglement, and while there are surely many Hermitian matrices, shouldn't monogamy butcher their dimension in order to produce valid DOs?

A. Neumaier said:
Of course, since the information about correlations is missing.
Okay, sorry, I forgot to mention that I was thinking of a scenario where two systems start independently, each in an isolated state, and are then brought into interaction. In that case there should be no entanglement initially.

In terms of pure states, the standard procedure/assumption is to build the initial states of the combined system as products of the individual pure states of the subsystems. This construction uniquely determines the state of the new system, hence there is no incompleteness here. But I am not sure how this works for mixed states and their DOs. Maybe I was mistaken and it works similarly after all? Is the combined DO given by the Kronecker product of the individual DOs? At least in terms of size this would fit.

A. Neumaier said:
But since classical random systems already have the same sort of incompleteness, your demand is unreasonable.
Actually, no. At the very least I don't know what you are referring to. All the probability models I know are complete in the set of quantities they model. Incompleteness implies that some statements are undecided; for example, it would mean one could not determine whether a specific expectation or correlation has a value, or whether it conditions on an impossibility and hence is not even defined. Not being able to differentiate between two such extremes would produce a model that is mathematically challenging but of little practical purpose. Given the history of the axiom of choice it should be clear how much complexity this entails, which is why models are usually set up so they don't run into such issues.
 
  • #43
Killtech said:
Is the dimension squared though?
The space of Hermitian operators on a Hilbert space of complex dimension ##d## has real dimension ##d^2##. The semidefinite ones form a full-dimensional cone; normalization intersects this cone with an affine hyperplane, hence produces a ##d^2-1## dimensional compact manifold. All of these are physically reasonable density operators.
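For concreteness in the qubit case (a sketch of my own, using the standard Bloch parameterization ##\rho(\vec r) = \tfrac12(I + \vec r\cdot\vec\sigma)##): a 2x2 Hermitian matrix has ##d^2 = 4## real parameters, the trace condition removes one, and the remaining 3-parameter family is positive semidefinite exactly for ##|\vec r| \le 1##.

```python
# Sketch for d = 2: the d^2 - 1 = 3 real parameters of a qubit DO are the Bloch
# vector r; rho(r) = (I + r.sigma)/2 is Hermitian with trace 1 by construction,
# and positive semidefinite exactly when |r| <= 1 (eigenvalues (1 +/- |r|)/2).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho = lambda r: 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

rng = np.random.default_rng(1)
for _ in range(1000):
    r = rng.normal(size=3)
    r *= rng.uniform(0, 1.5) / np.linalg.norm(r)      # radii both inside and outside the ball
    ev = np.linalg.eigvalsh(rho(r))
    assert np.isclose(ev.sum(), 1.0)                                   # trace condition
    assert np.isclose(ev.min(), (1 - np.linalg.norm(r)) / 2)           # PSD iff |r| <= 1
```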
Killtech said:
I was thinking of a scenario where two systems start independently, each in an isolated state, and are then brought into interaction. In that case there should be no entanglement initially.
In this case the starting density operator is the tensor product of the pieces. But interaction immediately adds terms that destroy the tensor product structure. Thus after the first picosecond you are on the full-dimensional manifold of density operators.
Killtech said:
Is the combined DO given by the Kronecker product of the individual DOs? At least in terms of size this would fit.
Yes, Kronecker product = tensor product.
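A quick numerical illustration of both points (my own sketch for two qubits; the interaction Hamiltonian ##H = \sigma_z\otimes\sigma_z## is just an example choice): the initial combined DO is the Kronecker product of the pieces, the partial traces recover them, and a short evolution under the interaction already takes the state away from product form.

```python
# Sketch (two qubits): initial combined DO = Kronecker/tensor product of the parts;
# an interaction then immediately destroys the product structure.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rho_a = 0.5 * (np.eye(2) + 0.4 * sx)                  # some mixed single-qubit DOs
rho_b = 0.5 * (np.eye(2) + 0.7 * sz)
rho_ab = np.kron(rho_a, rho_b)                        # combined initial DO (uncorrelated)

def partial_trace(rho, keep):
    """Trace out one qubit of a two-qubit DO; keep=0 keeps the first factor."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

assert np.allclose(partial_trace(rho_ab, 0), rho_a)   # marginals recover the pieces
assert np.allclose(partial_trace(rho_ab, 1), rho_b)

# evolve with an example interaction Hamiltonian for a short time
H = np.kron(sz, sz)
U = expm(-1j * 0.3 * H)
rho_t = U @ rho_ab @ U.conj().T

# the evolved state is no longer the product of its marginals: correlations built up
prod = np.kron(partial_trace(rho_t, 0), partial_trace(rho_t, 1))
print(np.linalg.norm(rho_t - prod))                   # nonzero
```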

Killtech said:
Given the history of the axiom of choice
The axiom of choice is irrelevant in all that.
 
