Probabilism and determinism

  • #1
entropy1
Gold Member
916
53

Main Question or Discussion Point

If we use a probabilistic model of QM, is there still room for determinism? If we don't have knowledge of the exact outcomes, can there still be underlying determinism in such a model?

I am aware there are probabilistic and deterministic models available for QM. Does that mean QM could be intrinsically deterministic? Or is it both probabilistic and deterministic?
 
Last edited:

Answers and Replies

  • #2
atyy
Science Advisor
13,741
1,902
Yes. In classical physics, probability arises from determinism and a lack of knowledge about some variables. So the question is: could there be hidden variables in quantum physics? Bell's theorem shows that quantum physics is incompatible with local hidden variables, but does not rule out nonlocal hidden variables. In non-relativistic quantum mechanics, Bohmian mechanics gives one hypothesis about what the hidden variables could be. For relativistic quantum mechanics, good hypotheses about possible hidden variables have not yet been constructed.
 
  • Like
Likes entropy1
  • #3
Strilanc
Science Advisor
588
210
A simple example of a deterministic theory of quantum mechanics is to have a computer compute the evolution of the wavefunction and use a cryptographic pseudo-random number generator to do the collapse. This is an example of a non-local hidden variable theory; the "hidden variables" are just the wavefunction's amplitudes and the state of the CPRNG.

Another example is many-worlds, where there simply isn't any collapse (which is the only "random" thing).
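
For concreteness, a minimal sketch of the toy model described above; the single-qubit example, the SHA-256-based generator, and all function names are illustrative choices, not part of the original proposal:

[CODE=python]
# Minimal sketch: unitary evolution of a small state vector, with "collapse"
# outcomes drawn from a cryptographic PRNG (SHA-256 in counter mode).
# The seed and counter play the role of the nonlocal hidden variables.
import hashlib
import numpy as np

def cprng_unit(seed: bytes, counter: int) -> float:
    """Deterministic number in [0, 1) derived from a cryptographic hash."""
    digest = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def collapse(state: np.ndarray, seed: bytes, counter: int) -> int:
    """Pick a basis outcome with Born-rule weights, deterministically."""
    probs = np.abs(state) ** 2
    return int(np.searchsorted(np.cumsum(probs), cprng_unit(seed, counter)))

# Example run: prepare (|0> + |1>)/sqrt(2) with a Hadamard, then "measure".
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = hadamard @ np.array([1.0, 0.0])
print(collapse(state, seed=b"hidden-variable", counter=0))
[/CODE]

Rerunning the program with the same seed reproduces the identical measurement record, which is the sense in which the model is deterministic.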
 
Last edited:
  • Like
Likes entropy1
  • #4
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
A simple example of a deterministic theory of quantum mechanics is to have a computer compute the evolution of the wavefunction and use a cryptographic pseudo-random number generator to do the collapse.
This gives only an approximation to quantum mechanics, and for very small systems only.
 
  • #5
Strilanc
Science Advisor
588
210
This gives only an approximation to quantum mechanics, and for very small systems only.
How so? This setup should be observationally indistinguishable from whatever "real" quantum mechanics you have in mind, so I'm not sure what you mean by it being an approximation. And why would the system have to be small? I certainly agree that the computer you need to do this computation is exponentially large in the size of the system, but the goal wasn't to come up with an efficient deterministic model.
 
  • #6
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
How so? This setup should be observationally indistinguishable from whatever "real" quantum mechanics you have in mind
Computer simulations of quantum systems are not observationally indistinguishable from real systems. They provide coarse approximations only, of unknown accuracy once the system is nontrivial.

Already for a single water molecule you need to solve a partial differential equation in 9 variables. What seems to be collapse is interaction with the environment. And that has an infinite number of degrees of freedom. Too much for an ordinary computer to simulate.

And what about the dynamic folding of a protein molecule immersed in water? Now you have PDEs in several thousand variables....

In your set-up you need to come up with a new algorithm for each problem class - not good for a "deterministic theory"!
 
  • #7
Strilanc
Science Advisor
588
210
I think you might be confusing computability with computational complexity. I am not claiming my example has low complexity, only that it is computable. Do you agree that the process I specified is computable, specifically Turing-computable?

Computer simulations of quantum systems are not observationally indistinguishable from real systems. They provide coarse approximations only, of unknown accuracy once the system is nontrivial.
Can you give an example where substituting a real system with the process I specified will create an observational distinction that can be detected by an experiment? How would that experiment not reject all of quantum mechanics, since basically the program is just doing the math that quantum mechanics says you should do to predict what a system might do?

Already for a single water molecule you need to solve a partial differential equation in 9 variables. What seems to be collapse is interaction with the environment. And that has an infinite number of degrees of freedom. Too much for an ordinary computer to simulate.
The number of relevant degrees of freedom is not infinite. It is very large, but it is finite.

In your set-up you need to come up with a new algorithm for each problem class - not good for a "deterministic theory"!
No, the same (inefficient) algorithm will handle every problem class. Just do the simplest possible thing and multiply the huge state vector by the even huger operation matrix. As you note, in practice this is not possible due to the blowup in costs. But it's a perfectly fine conceptual model, and it meets the criterion of computability.
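
As a rough sketch of that single brute-force algorithm, assuming the system is truncated to n qubits; the helper name lift is a made-up illustration:

[CODE=python]
# Minimal sketch: the state is a 2**n vector and every operation is lifted
# to a 2**n x 2**n matrix via Kronecker products. Memory grows as 4**n,
# which is exactly the cost blowup conceded above.
import numpy as np
from functools import reduce

def lift(gate: np.ndarray, target: int, n: int) -> np.ndarray:
    """Embed a single-qubit gate on qubit `target` into the full n-qubit space."""
    return reduce(np.kron, [gate if k == target else np.eye(2) for k in range(n)])

n = 3
state = np.zeros(2**n)
state[0] = 1.0                                   # |000>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = lift(hadamard, target=0, n=n) @ state    # huge matrix times huge vector
[/CODE]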
 
  • #8
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
Do you agree that the process I specified is computable, specifically Turing-computable?
Not even ##e^x## is computable in the Turing sense.
The number of relevant degrees of freedom is not infinite. It is very large, but it is finite.
What do you regard as relevant? In QED (which is part of the environment), one has an infinite number of soft photons.
Just do the simplest possible thing and multiply the huge state vector by the even huger operation matrix.
Well, the state vector of an electron is already infinite-dimensional!
 
  • Like
Likes Michael Price
  • #9
Strilanc
Science Advisor
588
210
Not even ##e^x## is computable in the Turing sense.

What do you regard as relevant? In QED (which is part of the environment), one has an infinite number of soft photons.

Well, the state vector of an electron is already infinite-dimensional!
Just use finer and finer finite approximations to avoid issues with infinity. The goal of experimental indistinguishability does not require infinite precision.

I agree that a computer can't output the entire decimal representation of ##e^2##, since it has infinitely many digits. But this doesn't stop you from outputting a million digits, more than enough for any practical purpose. There is no digit of ##e^2## that is uncomputable, so for all intents and purposes you have access to the whole number as a computable thing.
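
A minimal illustration of that point, assuming the mpmath library is available (the choice of a thousand digits is arbitrary):

[CODE=python]
# Any finite prefix of the digits of e**2 is computable on demand;
# a million digits works the same way, just slower.
from mpmath import mp

mp.dps = 1000        # working precision in decimal digits
print(mp.exp(2))     # e**2 to a thousand digits
[/CODE]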
 
  • #10
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
The goal of experimental indistinguishability does not require infinite precision.
But except for very simple problems, the dynamics is highly chaotic already at short time scales, so that (as in weather forecasts, or in the Lorenz attractor on time scales of days rather than nanoseconds) the initial precision is most likely lost after a very short time.
 
  • #11
Strilanc
Science Advisor
588
210
But except for very simple problems, the dynamics is highly chaotic already at short time scales, so that (as in weather forecasts, or in the Lorenz attractor on time scales of days rather than nanoseconds) the initial precision is most likely lost after a very short time.
Correct. However, one of the extremely convenient things about focusing on computability instead of complexity is the ability to fix that. As part of each time step, go back to the start and increase the precision of the whole computation by a factor of 100.
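
A minimal sketch of that restart scheme, using the logistic map as a stand-in for chaotic dynamics and mpmath for arbitrary precision; the step counts and starting precision are arbitrary illustrative choices:

[CODE=python]
# Each outer pass recomputes the whole trajectory from t = 0 with 100x the
# previous working precision, so lost digits are recovered by restarting.
from mpmath import mp, mpf

def run_to(t_steps: int, digits: int):
    mp.dps = digits                  # set working precision for this pass
    x = mpf("0.1")
    for _ in range(t_steps):
        x = 4 * x * (1 - x)          # chaotic logistic map step
    return x

digits = 50
for t in range(1, 4):                # three passes keep the demo fast
    print(t, mp.nstr(run_to(10 * t, digits), 15))
    digits *= 100                    # restart next pass with more precision
[/CODE]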
 
  • #12
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
The goal of experimental indistinguishability does not require infinite precision.
But it requires analysis that does not yet exist to know which precision is sufficient.
 
  • #13
Strilanc
Science Advisor
588
210
But it requires analysis that does not yet exist to know which precision is sufficient.
Do you really think that it's not possible to pick a function that grows sufficiently fast to exceed "sufficient"? At time step t, increase the precision by a factor of Ackermann(t, t).
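
For reference, the Ackermann function itself is only a few lines of code, which is exactly the point (a minimal sketch):

[CODE=python]
# Trivially computable, hopelessly infeasible to evaluate for small inputs.
import sys
sys.setrecursionlimit(100_000)

def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 2))   # 7
print(ackermann(3, 3))   # 61
# ackermann(4, 4) already dwarfs the resources of the observable universe.
[/CODE]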

Look, I realize that when I say "simulate it with a computer" I am "lying to children" by leaving out all of the details that must be contended with (such as infinities and precision and renormalization and Gibbs ripples and so on). But these are all solvable problems, and to the extent that they aren't solvable, they also aren't solvable by a human sitting down with pen and paper trying to work out the behavior of the system. If it can't even be done in principle by a human with pen and paper and unlimited time, then I really don't see how it can be said to be part of the theory in the first place. And if it can be done by a human with pen and paper and unlimited time, then it is computable.
 
  • #14
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
At time step t, increase the precision by a factor of Ackermann(t, t).
But then you very soon exceed the resources available in the universe. Such a computer cannot exist.
 
  • #15
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
If it can't even be done in principle by a human with pen and paper and unlimited time, then I really don't see how it can be said to be part of the theory in the first place.
A human can write down with pen and paper ##a_{k+1}=Ackermann(a_k,a_k)##, where ##a_0=1##, and consider ##A=a_{Ackermann(99,99)}## without ever having to compute it; computing it would exhaust any computer that can ever be built. Knowing a law is very different from being able to execute it.
 
  • #16
Strilanc
Science Advisor
588
210
But then you very soon exceed the resources available in the universe. Such a computer cannot exist.
So? We're not talking about complexity, we're talking about computability. Computability ignores resource constraints. I already mentioned this several times.

A human can write down with pen and paper ##a_{k+1}=Ackermann(a_k,a_k)##, where ##a_0=1##, and consider ##A=a_{Ackermann(99,99)}## without ever having to compute it; computing it would exhaust any computer that can ever be built. Knowing a law is very different from being able to execute it.
Again, we're not talking about complexity, we're talking about computability.

I really feel like you're being purposefully obtuse here. Do you actually think symbolic manipulations are intrinsically necessary to simulate physics? And even if you do, computers can do symbolic manipulation just fine! Symbolic algebra packages are a dime a dozen.

Look, if there is any sense in which quantum mechanics actually explains the world, then there must be some mechanical process for taking the equations and turning them into predictions. If there isn't such a process, then quantum mechanics is not a predictive theory. It would be useless. Conversely, if there is such a process, then a computer can do it. So just have the computer do that thing. I don't see how this could be controversial! Do you think quantum mechanics requires a hypercomputer or something?
 
  • #17
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
there must be some mechanical process for taking the equations and turning them into predictions. If there isn't such a process, then quantum mechanics is not a predictive theory. It would be useless.
Of course we can predict only what we can compute.

But this is completely unrelated to simulation using random numbers (as you stipulate) or computability in terms of astronomically large resources.

Quantum mechanics is primarily about understanding the laws of physics. Making correct predictions is just a check of having understood correctly. And the predictions count (scientifically and technologically) only if they are made fast enough and with implementable resources - otherwise they are completely irrelevant.
 
  • #18
Boing3000
Gold Member
332
37
But this is completely unrelated to simulation using random numbers (as you stipulate) or computability in terms of astronomically large resources.
Maybe the issue here is the difference between "simulation", which suggests a projection into reality/4D events, and "computation" in a virtual Hilbert space.
How do you do that projection without random numbers? Isn't that projection needed to predict anything?

You spoke of a large molecule as an example: how can a pure wave-function computation give us any information about what a molecule will look like, or worse, its temporal dynamics? Would it not actually look like some small fuzzy MWI view, impossible to map to observation?
 
  • #19
entropy1
Gold Member
916
53
Of course we can predict only what we can compute.

But this is completely unrelated to simulation using random numbers (as you stipulate) or computability in terms of astronomically large resources.
But except for very simple problems, the dynamics is highly chaotic already at short time scales, so that (as in weather forecasts, or in the Lorenz attractor on time scales of days rather than nanoseconds) the initial precision is most likely lost after a very short time.
Is it then the supposed probabilistic aspect of QT that diminishes its computability and causes the chaotic behaviour when simulating it? Because this probabilistic aspect would be easy to implement computationally. And now that I come to think of it: if you implement randomness, then by definition the evolution of the simulation will possibly not follow any other simulation, or reality, because the random values differ from reality's - at least if chaotic behaviour is involved! :smile:
 
  • #20
A. Neumaier
Science Advisor
Insights Author
2019 Award
7,223
3,113
how can a pure wave-function computation give us any information about what a molecule will look like
Nothing can do that, not even a simulation. The latter can only give one possibility for how it might look.
You spoke of a large molecule as an example
Statistical mechanics predicts its properties in equilibrium, without any simulation. Much of quantum physics does not predict individual events but only scattering cross sections for tiny systems, which turn into deterministic reaction rates for macroscopic (nuclear or chemical) reactions. The latter are observable.
Is it then the supposed probabilistic aspect of QT that diminishes its computability and causes the chaotic behaviour when simulating it?
No; it is in the fact that realistic Hamiltonians have a spectrum that extends to infinity and hence contains arbitrarily fast time scales that influence the slower time scales. Chaoticity is visible in approximations that track only a limited number of degrees of freedom; it is the origin of decoherence (if you interpret the derivations in the appropriate way). Decoherence happens on exceedingly small time scales, unless one takes extreme care to isolate a (tiny) system from its environment.
 
  • Like
Likes Mentz114
  • #21
Boing3000
Gold Member
332
37
Statistical mechanics predicts its properties in equilibrium, without any simulation.
Well, I think that the linearity of those 'classical' predictions runs into the same problem, namely that "equilibrium" is not a thing in nature; thus simulation is the only way to actually pinpoint strange/chaotic features that may be usable from a practical/engineering point of view.

Much of quantum physics does not predict individual events but only scattering cross sections for tiny systems, which turn into deterministic reaction rates for macroscopic (nuclear or chemical) reactions. The latter are observable.
I understand that. You will observe some statistics of microscopic events. It is indeed a prediction, although not a precise one (not the exact configuration).

But in this same scenario, if random events (like some tunneling) allow the protein to fold in some way (and in many others, at many places), the end result would be an excessively huge configuration space of "shapes". But if some of those "random" shape "mutations" turn out to be more stable, and maybe "contagious" to others, a few runs of the (albeit imprecise) simulation would predict/show that.
Could a more "pure" computation have predicted that? Or, if it is in the Hilbert space somewhere, would it be "undetectable"?
Isn't it the (pseudo-)collapse to a certain real (4D) configuration that would actually speed up the process by removing less likely branches?
 
  • #22
entropy1
Gold Member
916
53
Does that mean QM could be intrinsically deterministic?
If there are at least two different ways to interpret QM deterministically, isn't that odd? Can something be deterministic in two different ways?
 
  • #23
5,428
291
If there are at least two different ways to interpret QM deterministically, isn't that odd? Can something be deterministic in two different ways?
Classically, 'deterministic' usually means that given 1) initial conditions, 2) consistent equations of motion, and 3) a method of evolution that maintains the symmetries, we can predict exactly the values of dynamical variables in the future. Like the Hamiltonian method.

In QT the same thing applies to probability distributions of the values of dynamical variables. The probabilities evolve in time in a way that satisfies the conditions above, and in this sense the theory is deterministic. But knowing only a probability is not the same as knowing the outcome. So QT is both deterministic and not - depending on how 'deterministic' is defined.

Hamilton's equations of motion can in fact be written in terms of the Poisson bracket with ##H##, in formal parallel to Heisenberg's EOM, which uses the commutator with ##H##.
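
Written out, that formal parallel (a standard textbook correspondence) reads

$$\frac{df}{dt}=\{f,H\}+\frac{\partial f}{\partial t} \qquad\longleftrightarrow\qquad \frac{dA}{dt}=\frac{i}{\hbar}\,[H,A]+\frac{\partial A}{\partial t}$$

where ##f## is a classical phase-space function, ##\{\cdot,\cdot\}## the Poisson bracket, and ##A## a Heisenberg-picture operator.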
 
  • Like
Likes entropy1
