Hydrodynamic interpretation and consistency with classic probability

Killtech
TL;DR Summary
Looking for an interpretation that is consistent with classic probability
So back in the other thread I asked about the compatibility of classical probability theory (PT) and QM – and it turns out there is no inherent reason why they need to be incompatible. Therefore I was looking for something that makes them compatible, which wasn't easy to search for. But there seem to be a few approaches – though not under that 'search keyword'.

Anyhow, in terms of PT we have all we need: we have the time evolution of the system and how to extract probabilities from it. So starting from Schrödinger, all we need to do is formulate it within the PT framework, for example by finding a stochastic process (SP) for that (I found many attempts in this direction, but none doing the thing I would consider canonical). Looking at the time evolution of the probability density - the Madelung equations in particular - it becomes immediately clear that those are not linear. Therefore they are not suited to form the stochastic kernel (i.e. the PT equivalent of a Hamilton operator), as its linearity is non-negotiable. That simply means that the phase space of a point-like particle is not viable as the classical PT state space ##\Omega##. The problem seems to be that the wave function just stores too much information (way more than the 6 DoF of a particle) that is crucial for the time evolution and cannot be reduced without breaking predictions (well, no surprise there, since that is the premise of quantum information, I suppose).
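For reference, the standard Madelung form: writing ##\psi=\sqrt{\rho}\,e^{iS/\hbar}##, Schrödinger's equation splits into

$$\partial_t \rho + \nabla\cdot\left(\rho\,\frac{\nabla S}{m}\right) = 0,$$
$$\partial_t S + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} = 0.$$

The last term (the quantum potential) depends nonlinearly on ##\rho##, which is exactly what rules out a linear stochastic kernel on the particle's phase space.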

But with so many irreducible degrees of freedom, the smallest state space I could find to fit the information in was that of a field. That would allow the stochastic kernel to be linear, but it makes the corresponding stochastic process a mostly deterministic one, leaving mainly the initial distribution and measurements to the domain of stochastics. Anyhow, this is the type of interpretation I am looking for.

Trying to give this field any physical interpretation then led me straight to a hydrodynamic interpretation, which I didn't know before. Now why is this interpretation actually so much less popular compared to the others that it's quite hard to find much about it?

I have given it a bit of thought, and it appears to me that this yields a very intuitive interpretation for a lot of quantum behavior. But admittedly it comes with a long tail of consequences from dragging classical physics into quantum territory, which undoubtedly causes some conflicts. Interestingly, however, I haven't found where this leads to any serious contradictions that cannot be resolved by accepting that Schrödinger is just a situational proxy for QED.

So obviously I want to read more into this and thus am looking for openly available material on this topic. In particular, how are the aspects of classical field interactions implied by classical physics handled/interpreted vs. how QM does it? But also, why do we try to interpret QM with point particles if they don't even carry nearly enough information to properly describe the time evolution of the system? Shouldn't we be discouraged enough from knowing how much trouble the very idea of a point-like charge already causes in classical physics?

The problem with hydromechanical interpretations is that they become odd when applied to systems of particles, since their wave functions depend on too many variables.

A. Neumaier said:
The problem with hydromechanical interpretations is that they become odd when applied to systems of particles, since their wave functions depend on too many variables.
Yeah, I stumbled on the same thing but thought there is a kind of canonical resolution to it. Since you don't want to model any non-physical artifacts, you would look for a minimal state space that still keeps your stochastic kernel linear. This corresponds to the same thing you would want to do from a classical physics point of view: exploit all available symmetries of the multi-particle distribution to reduce its function domain as far as possible (as close as possible to a single density + current field). Because in the end you don't want anything other than just two new field equations for charge and its current to complete Maxwell.

The thing is that two-particle states don't store that much more information than a single one. I mean, a ##C^\infty## field stores a countable infinity of information (i.e. the number of basis vectors), whereas two fields store exactly the same amount (##\mathbb{N}## is bijective to ##\mathbb{N}^2##). This of course oversimplifies things in terms of differentiability, but I hope you get what I mean.
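A minimal sketch of that counting argument: the Cantor pairing function is a standard explicit bijection between ##\mathbb{N}^2## and ##\mathbb{N}## (the helper names here are mine).

```python
import math

# Cantor pairing: a bijection between N^2 and N, illustrating that two
# countable "slots" of information hold no more than a single one.
def cantor_pair(k1, k2):
    s = k1 + k2
    return s * (s + 1) // 2 + k2

def cantor_unpair(z):
    # invert the pairing: recover the diagonal index w, then the pair
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    k2 = z - t
    return w - k2, k2
```

Every pair of naturals maps to a distinct single natural and back, so "two fields' worth" of countable coefficients fits losslessly into one.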

So the simple question here is to check how many of the degrees of freedom in a two-particle wave function are actually needed for correct predictions. But every symmetry we find that leaves the measured distributions invariant means the wave function stores additional unphysical artifact data which we do not need. This leads to the obvious question: what attempts have been made in this regard?

Killtech said:
Anyhow, in terms of PT we have all we need: we have the time evolution of the system and how to extract probabilities from it. So starting from Schrödinger, all we need to do is formulate it within the PT framework, for example by finding a stochastic process (SP) for that

I don't understand what "that" refers to. A stochastic process is technically an indexed collection of random variables. Describing certain types of stochastic processes (e.g. Markov) as physical phenomena requires defining what the "states" are. Is the quest to start with a set of variables and states that each have a particular physical interpretation and model their evolution as a stochastic process? - or is the quest to invent something which need not have an existing physical interpretation, give a model for its stochastic evolution, and find procedures for using it to compute all the probabilities that QM can?

If the latter case is the goal, then the thing-to-be-invented (like the wave function) is a complicated structure from the point of view of probability theory. It is common in physics to ignore the fundamentals of Probability and Statistics 101 by refusing to state the "probability space" under discussion. A thing that allows computing all probabilities computed by QM has the ability to define many related but different probability spaces. If such a thing were a probability distribution of something (i.e. in a particular probability space), then the conventional methods of using it to define other probability spaces would be things like conditional distributions, marginal distributions, and distributions of functions of the random variables in the original probability space.
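As a toy illustration of that bookkeeping (the joint distribution below is made up), one finite probability space yields the related spaces mentioned above via marginals and conditionals:

```python
from fractions import Fraction as F

# A toy joint distribution P(X, Y) on a finite probability space.
joint = {('a', 0): F(1, 4), ('a', 1): F(1, 4),
         ('b', 0): F(1, 2), ('b', 1): F(0)}

# Marginal distribution of X: sum P(X, Y) over Y.
marg_x = {}
for (x, y), p in joint.items():
    marg_x[x] = marg_x.get(x, F(0)) + p

# Conditional distribution P(Y | X = 'a'): renormalize the X = 'a' slice.
cond_y_given_a = {y: p / marg_x['a']
                  for (x, y), p in joint.items() if x == 'a'}
```

Each derived dictionary is itself a probability distribution in its own space, which is precisely the kind of explicitness the paragraph above asks for.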

Stephen Tashi said:
I don't understand what "that" refers to. A stochastic process is technically an indexed collection of random variables. Describing certain types of stochastic processes (e.g. Markov) as physical phenomena requires defining what the "states" are. Is the quest to start with a set of variables and states that each have a particular physical interpretation and model their evolution as a stochastic process? - or is the quest to invent something which need not have an existing physical interpretation, give a model for its stochastic evolution, and find procedures for using it to compute all the probabilities that QM can?
A process ##S_t## such that ##P(X(S_t)=x) = |\psi (x,t)|^2##, and the same for ##p## and all other potential observables. I also want to assume ##P(S_t=s)=P(S_t=s|\text{X measured at } (x,t))## for all ##t<t_\text{X measured}## - that is, the state should not know what of it might be measured (excluding cases where there is an obvious prior interaction with the measurement device). My primary goal here was to establish the class of processes that can meet these criteria, just to know what compatibility with classic probability entails - therefore that's your latter option. Once that is done I would start looking for interpretations.
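A minimal toy sketch of that kind of process for a two-level system (my own construction: ##H=\sigma_x## with ##\hbar=1##, so ##\psi_t=(\cos t,\,-i\sin t)## from ##\psi_0=|0\rangle##). The kernel is deterministic (unitary), and only the measurement outcome is sampled, so ##P(X(S_t)=x)=|\psi(x,t)|^2## holds by construction:

```python
import math
import random

# Two-level system, H = sigma_x (units hbar = 1): psi_t = exp(-iHt) psi_0
# with psi_0 = |0>, giving psi_t = (cos t, -i sin t).
def psi(t):
    return (complex(math.cos(t), 0.0), complex(0.0, -math.sin(t)))

def born_probs(t):
    # Born rule: P(X = 0), P(X = 1) from the deterministic state evolution.
    a, b = psi(t)
    return abs(a) ** 2, abs(b) ** 2

def measure(t, rng):
    # The only stochastic step: sample an outcome from the Born distribution.
    p0, _ = born_probs(t)
    return 0 if rng.random() < p0 else 1

rng = random.Random(0)
t = 0.7
p0, p1 = born_probs(t)
samples = [measure(t, rng) for _ in range(100_000)]
freq0 = samples.count(0) / len(samples)
```

The empirical frequency of outcome 0 tracks ##\cos^2 t##, while the state itself evolves deterministically, matching the "mostly deterministic process" picture from the opening post.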

As for using the entire Hilbert space (well, the sphere subset of normalized states) as the probability state space - well, this is part of the questions I have about QM. How much of it do I really need? For a one-particle system I think it's hardly reducible, but for two or more particles I can't figure that out myself. Anyhow, I am currently looking into quantum information to see if that can help me here, since it's kind of related. The 'minimal' possible state space should directly correspond to the information stored in a quantum state.

The question of interpretation comes last - i.e. looking for a nice transformation of the state information into something more intuitive. And since it looks like I need to model an infinity of information per state, the first thing that came to my mind is that a simple classical field just matches that (in size). Sure, I could take the wave function itself, but that doesn't help interpretation. So the hydrodynamic description, which rewrites ##\psi## in terms of a density and its corresponding current field, is seemingly an obvious choice - and hence why I wanted to know more about it. And if it's not, then what are the arguments against it?
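A numerical sketch of that rewriting (the grid and packet parameters are arbitrary choices of mine): extracting the density ##\rho=|\psi|^2## and the current ##j=\tfrac{\hbar}{m}\,\mathrm{Im}(\psi^*\partial_x\psi)## from a Gaussian packet with momentum ##\hbar k_0##, for which the flow velocity ##j/\rho## at the centre should come out as ##\hbar k_0/m##:

```python
import cmath
import math

hbar, m = 1.0, 1.0
k0, sigma, x0 = 2.0, 1.0, 0.0   # packet momentum, width, centre
N, L = 400, 20.0                # grid points, box length
dx = L / N
xs = [-L / 2 + i * dx for i in range(N)]

# Gaussian wave packet with momentum hbar*k0 (unnormalized), then
# normalized so that sum |psi|^2 dx = 1.
psi = [cmath.exp(-(x - x0) ** 2 / (4 * sigma ** 2) + 1j * k0 * x) for x in xs]
norm = math.sqrt(sum(abs(p) ** 2 for p in psi) * dx)
psi = [p / norm for p in psi]

rho = [abs(p) ** 2 for p in psi]

# Probability current j = (hbar/m) Im(psi* dpsi/dx), central differences.
j = [0.0] * N
for i in range(1, N - 1):
    dpsi = (psi[i + 1] - psi[i - 1]) / (2 * dx)
    j[i] = (hbar / m) * (psi[i].conjugate() * dpsi).imag

i_c = N // 2                    # grid point at the packet centre
v = j[i_c] / rho[i_c]           # flow velocity, should be ~ hbar*k0/m
```

The pair ##(\rho, j)## carries the same information as ##\psi## up to a global phase, which is what makes the hydrodynamic picture viable as a re-encoding rather than a truncation.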

Killtech said:
So starting from Schrödinger, all we need to do is formulate it within the PT framework, for example by finding a stochastic process (SP) for that (I found many attempts in this direction, but none doing the thing I would consider canonical). Looking at the time evolution of the probability density - the Madelung equations in particular - it becomes immediately clear that those are not linear. Therefore they are not suited to form the stochastic kernel (i.e. the PT equivalent of a Hamilton operator), as its linearity is non-negotiable. That simply means that the phase space of a point-like particle is not viable as the classical PT state space ##\Omega##. The problem seems to be that the wave function just stores too much information (way more than the 6 DoF of a particle) that is crucial for the time evolution and cannot be reduced without breaking predictions (well, no surprise there, since that is the premise of quantum information, I suppose).
The interpretations I know which define a stochastic process following classical probability theory, like Nelsonian stochastics, don't use phase space as the base, but only the configuration space. Everything else would look quite problematic given the Kochen-Specker theorem. The straightforward way to handle Kochen-Specker is to use the configuration space as the preferred one, really describing the state of the system, while everything else is contextual.

The quite recent (2011) variant of "entropic dynamics" proposed by Caticha follows completely the scheme of objective Bayesian probability theory. See Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A 44, 225303, arXiv:1005.2357. Everything is quite nice there: a probability density on the configuration space, together with the entropy of all other variables.

I would also prefer a field ontology in comparison with a particle ontology. But, to be clear, it is not the wave function on the configuration space which is the "field" of a field ontology, but the fields used in QFT, which are defined on the usual three-dimensional space. For dBB, the formulas for scalar field theory can be found in Bohm, D., Hiley, B.J., Kaloyerou, P.N. (1987). An ontological basis for the quantum theory, Phys. Reports 144(6), 321-375.

Elias1960 said:
The interpretations I know which define a stochastic process following classical probability theory, like Nelsonian stochastics, don't use phase space as the base, but only the configuration space. Everything else would look quite problematic given the Kochen-Specker theorem. The straightforward way to handle Kochen-Specker is to use the configuration space as the preferred one, really describing the state of the system, while everything else is contextual.
The Kochen-Specker theorem only applies to states described in terms of observables. Probability theory itself doesn't have a concept of observables, so it is an aspect you have to add to your specific model yourself. As such, Kochen-Specker is no restriction on the state space unless you willingly limit yourself to a very special type of information.

Besides, note that by the axioms of QM, observables have to be linear operators. That linearity, however, is a huge restriction, of which Kochen-Specker is one consequence (Kochen-Specker explicitly uses this axiom). While it makes sense from a technical point of view to define observables in this fashion, I find it hard to find any clear experimental indication for it. I mean, I can't imagine how you would even try to experimentally prove that non-linear observables don't exist. On the other hand, if one finds any process in QM for which time evolution isn't exactly linear, it would put this postulate under scrutiny.

Therefore it is a lot more useful to base the state space around the underlying information you want to model, regardless of whether that information might not be directly experimentally observable. Since I really want to understand what information that is, it makes a lot more sense for me to take this route. Information that is irreducible is real enough for me, and certainly more so than an observable which is not native to the actual thing I want to describe.

Elias1960 said:
I would also prefer a field ontology in comparison with a particle ontology. But, to be clear, it is not the wave function on the configuration space which is the "field" of a field ontology, but the fields used in QFT, which are defined on the usual three-dimensional space. For dBB, the formulas for scalar field theory can be found in Bohm, D., Hiley, B.J., Kaloyerou, P.N. (1987). An ontological basis for the quantum theory, Phys. Reports 144(6), 321-375.
Yeah, I also found a particle ontology to be more trouble than help so far. Sure, for more general systems we need the quantum field framework, but for a start I want to understand something as simple as the single-particle system in a PT-consistent way. And as a field/function just happens to be more akin to the information contained in the state than anything else, it is fair to attempt to picture it like this, and overall as a simplistic interpretation it works shockingly well. No other interpretation I know so far has remotely been able to present, for example, the QM behavior of an H-atom as intuitively as this one, especially because it even makes total sense from a classical point of view - ironically by the very argument used to make the Bohr-Rutherford model fail classically. So why dismiss it?


1. What is hydrodynamic interpretation in classic probability?

Hydrodynamic interpretation in classic probability is a mathematical approach that uses fluid dynamics principles to model and analyze random events. It involves treating probabilities as fluid flows and using equations and models from fluid dynamics to study and predict the behavior of these probabilities.

2. How is hydrodynamic interpretation used in probability theory?

Hydrodynamic interpretation is used in probability theory to provide a physical and intuitive understanding of random events and their probabilities. It can help in visualizing and analyzing complex probability problems, and can also be used to develop new models and theories in probability and statistics.

3. What is the consistency principle in classic probability?

The consistency principle in classic probability states that as the number of trials or observations increases, the observed frequency of an event will approach its theoretical probability (this is essentially the law of large numbers). This principle is essential in validating and verifying the accuracy of probability models and theories.
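A quick simulation illustrates this convergence for a fair coin with p = 0.5 (the function name and seed are arbitrary choices):

```python
import random

# Empirical frequency of "heads" for a fair coin: as the number of
# trials grows, it approaches the theoretical probability p = 0.5.
def empirical_freq(n, p=0.5, seed=42):
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n) if rng.random() < p)
    return heads / n

freqs = {n: empirical_freq(n) for n in (100, 10_000, 1_000_000)}
```

With more trials, the typical deviation from 0.5 shrinks on the order of one over the square root of the number of trials.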

4. How is the consistency principle related to hydrodynamic interpretation?

Hydrodynamic interpretation provides a physical basis for the consistency principle in classic probability. Just as fluids behave consistently and follow certain patterns and laws, probabilities also follow consistent patterns and laws as the number of trials increases. Hydrodynamic interpretation helps in understanding and explaining this consistency in probabilities.

5. What are some real-life applications of hydrodynamic interpretation in classic probability?

Hydrodynamic interpretation has been applied in various fields, such as finance, physics, and biology, to model and analyze random events. It has been used to study stock market fluctuations, fluid flow in porous media, and the spread of diseases in a population. It can also be used in risk assessment and decision-making processes.
