Can Quantum Mechanics Be Both Probabilistic and Deterministic?

In summary, the conversation discusses the compatibility of probabilistic and deterministic models in quantum mechanics. While Bell's theorem rules out local hidden variables, it does not rule out nonlocal hidden variables. One example of a deterministic theory is the use of a computer to simulate the wavefunction and a cryptographic pseudo-random number generator for collapse. However, this approach only gives an approximation of quantum mechanics and is computable but not efficient. The conversation also touches on the challenges of simulating quantum systems on a computer and the issue of infinite degrees of freedom in the environment. The goal of experimental indistinguishability does not require infinite precision, but it may not be achievable due to the chaotic nature of quantum dynamics.
  • #1
entropy1
If we use a probabilistic model of QM, is there still room for determinism? If we don't have knowledge of the exact outcomes, can there still be underlying determinism in such a model?

I am aware there are probabilistic and deterministic models available for QM. Does that mean QM could be intrinsically deterministic? Or is it both probabilistic and deterministic?
 
  • #2
Yes. In classical physics, probability arises from determinism and a lack of knowledge about some variables. So the question is: could there be hidden variables in quantum physics? Bell's theorem shows that quantum physics is incompatible with local hidden variables, but does not rule out nonlocal hidden variables. In non-relativistic quantum mechanics, Bohmian mechanics gives one hypothesis about what the hidden variables could be. For relativistic quantum mechanics, good hypotheses about possible hidden variables have not yet been constructed.
 
  • #3
A simple example of a deterministic theory of quantum mechanics is to have a computer compute the evolution of the wavefunction and use a cryptographic pseudo-random number generator (CPRNG) to do the collapse. This is an example of a non-local hidden variable theory; the "hidden variables" are just the wavefunction's amplitudes and the state of the CPRNG.
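Concretely, a toy version of the idea (with a single qubit standing in for the wavefunction and an ordinary seeded PRNG standing in for the CPRNG) might look like this:

```python
# Toy sketch: deterministic Schrodinger evolution plus a "collapse" decided by
# a seeded PRNG (a stand-in for a cryptographic PRNG; any deterministic
# generator makes the whole process deterministic given its seed).
import numpy as np
import random

rng = random.Random(42)          # fixed seed -> the "hidden variable"

# |psi> = (|0> + |1>)/sqrt(2), evolved by some unitary U
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
psi = U @ psi                    # deterministic wavefunction evolution

def collapse(state):
    """Pick a basis outcome with Born-rule probabilities, using the seeded PRNG."""
    probs = np.abs(state) ** 2
    r = rng.random()             # deterministic given the seed
    outcome = 0 if r < probs[0] else 1
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

outcome, psi = collapse(psi)
print("measured:", outcome)
```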

Another example is many-worlds, where there simply isn't any collapse (which is the only "random" thing).
 
  • #4
Strilanc said:
A simple example of a deterministic theory of quantum mechanics is to have a computer compute the evolution of the wavefunction and use a cryptographic pseudo-random number generator to do the collapse.
This gives only an approximation to quantum mechanics, and for very small systems only.
 
  • #5
A. Neumaier said:
This gives only an approximation to quantum mechanics, and for very small systems only.

How so? This setup should be observationally indistinguishable from whatever "real" quantum mechanics you have in mind, so I'm not sure what you mean by it being an approximation. And why would the system have to be small? I certainly agree that the computer you need to do this computation is exponentially large in the size of the system, but the goal wasn't to come up with an efficient deterministic model.
 
  • #6
Strilanc said:
How so? This setup should be observationally indistinguishable from whatever "real" quantum mechanics you have in mind
Computer simulations of quantum systems are not observationally indistinguishable from real systems. They provide coarse approximations only, of unknown accuracy once the system is nontrivial.

Already for a single water molecule you need to solve a partial differential equation in 9 variables. What seems to be collapse is interaction with the environment. And that has an infinite number of degrees of freedom. Too much for an ordinary computer to simulate.

And what about the dynamic folding of a protein molecule immersed in water? Now you have PDEs in several thousand variables...

In your set-up you need to come up with a new algorithm for each problem class - not good for a "deterministic theory"!
 
  • #7
I think you might be confusing computability with computational complexity. I am not claiming my example has low complexity, only that it is computable. Do you agree that the process I specified is computable, specifically Turing-computable?

A. Neumaier said:
Computer simulations of quantum systems are not observationally indistinguishable from real systems. They provide coarse approximations only, of unknown accuracy once the system is nontrivial.

Can you give an example where substituting a real system with the process I specified will create an observational distinction that can be detected by an experiment? How would that experiment not reject all of quantum mechanics, since basically the program is just doing the math that quantum mechanics says you should do to predict what a system might do?

A. Neumaier said:
Already for a single water molecule you need to solve a partial differential equation in 9 variables. What seems to be collapse is interaction with the environment. And that has an infinite number of degrees of freedom. Too much for an ordinary computer to simulate.

The number of relevant degrees of freedom is not infinite. It is very large, but it is finite.

A. Neumaier said:
In your set-up you need to come up with a new algorithm for each problem class - not good for a "deterministic theory"!

No, the same (inefficient) algorithm will handle every problem class. Just do the simplest possible thing and multiply the huge state vector by the even huger operation matrix. As you note, in practice this is not possible due to the blowup in costs. But it's a perfectly fine conceptual model, and it meets the criterion of computability.
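To spell out what "the simplest possible thing" means, here is a toy sketch for a few qubits; the same brute-force recipe works for any finite problem, just with astronomically larger matrices:

```python
# Brute-force sketch: evolve an n-qubit state by explicitly building the full
# 2^n x 2^n operator. Exponentially costly, but perfectly well-defined.
import numpy as np

n = 3                                        # tiny, so the arrays stay printable
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # Hadamard gate on one qubit
I = np.eye(2)

# Full operator: H on qubit 0, identity on the rest
U = H
for _ in range(n - 1):
    U = np.kron(U, I)                        # 2^n x 2^n matrix

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                               # start in |00...0>
state = U @ state                            # "huge matrix times huge vector"
print(np.round(state, 3))
```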
 
  • #8
Strilanc said:
Do you agree that the process I specified is computable, specifically Turing-computable?
Not even ##e^x## is computable in the Turing sense.
Strilanc said:
The number of relevant degrees of freedom is not infinite. It is very large, but it is finite.
What do you regard as relevant? In QED (which is part of the environment), one has an infinite number of soft photons.
Strilanc said:
Just do the simplest possible thing and multiply the huge state vector by the even huger operation matrix.
Well, the state vector of an electron is already infinite-dimensional!
 
  • #9
A. Neumaier said:
Not even ##e^x## is computable in the Turing sense.

What do you regard as relevant? In QED (which is part of the environment), one has an infinite number of soft photons.

Well, the state vector of an electron is already infinite-dimensional!

Just use finer and finer finite approximations to avoid issues with infinity. The goal of experimental indistinguishability does not require infinite precision.

I agree that a computer can't output the entire decimal representation of ##e^2##, since it has infinitely many digits. But this doesn't stop you from outputting a million digits; more than enough for any practical purpose. There is no digit of ##e^2## that is uncomputable, so for all intents and purposes you have access to the whole number as a computable thing.
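For example, with Python's standard decimal module (a toy illustration of the point, not tied to any particular physics computation):

```python
# Sketch: computing e^2 to (almost) any desired precision with the standard
# library -- every digit is computable, even though the full expansion is infinite.
from decimal import Decimal, getcontext

getcontext().prec = 60                 # ask for 60 significant digits
print(Decimal(2).exp())                # 7.3890560989306502272304274605...

getcontext().prec = 120                # need more? just raise the precision
print(Decimal(2).exp())
```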
 
  • #10
Strilanc said:
The goal of experimental indistinguishability does not require infinite precision.
But except for very simple problems, the dynamics is highly chaotic already at short time scales, so that (as in weather forecasts, or in the Lorenz attractor, on time scales of days rather than nanoseconds) the initial precision is most likely lost after a very short time.
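A toy classical illustration of this loss of precision (the Lorenz system rather than any quantum dynamics): two trajectories started ##10^{-10}## apart separate to order-one differences within a few tens of time units.

```python
# Sketch: sensitivity to initial conditions in the Lorenz system. Two runs
# starting 1e-10 apart end up completely different after a short time.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])   # crude Euler step, enough for a demo

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-10, 0.0, 0.0])              # almost identical start
for step in range(4000):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(step, np.linalg.norm(a - b))       # separation grows roughly exponentially
```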
 
  • #11
A. Neumaier said:
But except for very simple problems, the dynamics is highly chaotic already at short time scales, so that (as in weather forecasts, or in the Lorenz attractor, on time scales of days rather than nanoseconds) the initial precision is most likely lost after a very short time.

Correct. However, one of the extremely convenient things about focusing on computability instead of complexity is the ability to fix that. As part of each time step, go back to the start and increase the precision of the whole computation by a factor of 100.
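As a toy illustration of that restart-and-refine loop (a chaotic classical map and the decimal module; the growth factor is scaled down from 100 just so the example finishes quickly):

```python
# Sketch: at every step, redo the whole evolution from t=0 with a much higher
# working precision, so roundoff never gets a chance to accumulate.
from decimal import Decimal, getcontext

def evolve(x0, steps, prec):
    """Re-run the whole toy 'simulation' (a chaotic logistic map) from the start
    at the requested precision."""
    getcontext().prec = prec
    x = Decimal(x0)
    r = Decimal("3.9")
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x0 = "0.123456789"
prec = 20
for step in range(1, 5):
    prec *= 10                       # a factor of 10 instead of 100, to keep the toy quick
    print(step, prec, str(evolve(x0, step, prec))[:20])
```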
 
  • #12
Strilanc said:
The goal of experimental indistinguishability does not require infinite precision.
But it requires nonexistent analysis to know which precision is sufficient.
 
  • #13
A. Neumaier said:
But it requires nonexistent analysis to know which precision is sufficient.

Do you really think that it's not possible to pick a function that grows sufficiently fast to exceed "sufficient"? At time step t, increase the precision by a factor of Ackermann(t, t).
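For anyone who hasn't met it, the Ackermann function is trivial to write down but grows faster than any resource bound you care to name; a quick toy implementation:

```python
# Sketch: the Ackermann function, to give a feel for how absurdly fast the
# proposed precision schedule grows -- Ackermann(4, 4) already dwarfs any
# physically realizable amount of memory.
from functools import lru_cache
import sys

sys.setrecursionlimit(10000)

@lru_cache(maxsize=None)
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

for t in range(4):
    print(t, ackermann(t, t))   # 1, 3, 7, 61 -- and Ackermann(4, 4) is hopeless
```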

Look, I realize that when I say "simulate it with a computer" I am "lying to children" by leaving out all of the details that must be contended with (such as infinities and precision and renormalization and Gibbs ripples and so on). But these are all solvable problems, and to the extent that they aren't solvable, they also aren't solvable by a human sitting down with pen and paper trying to work out the behavior of the system. If it can't even be done in principle by a human with pen and paper and unlimited time, then I really don't see how it can be said to be part of the theory in the first place. And if it can be done by a human with pen and paper and unlimited time, then it is computable.
 
  • #14
Strilanc said:
At time step t, increase the precision by a factor of Ackermann(t, t).
But then you very soon exceed the resources available in the universe. Such a computer cannot exist.
 
  • #15
Strilanc said:
If it can't even be done in principle by a human with pen and paper and unlimited time, then I really don't see how it can be said to be part of the theory in the first place.
A human can write down with pen and paper ##a_{k+1}=Ackermann(a_k,a_k)##, where ##a_0=1##, and consider ##A=a_{Ackermann(99,99)}## without ever having to compute it, which would exhaust any computer that can ever be built. Knowing a law is very different from being able to execute it.
 
  • #16
A. Neumaier said:
But then you very soon exceed the resources available in the universe. Such a computer cannot exist.

So? We're not talking about complexity; we're talking about computability. Computability ignores resource constraints. I already mentioned this several times.

A. Neumaier said:
A human can write down with pen and paper ##a_{k+1}=Ackermann(a_k,a_k)##, where ##a_0=1##, and consider ##A=a_{Ackermann(99,99)}## without ever having to compute it, which would exhaust any computer that can ever be built. Knowing a law is very different from being able to execute it.

Again, we're not talking about complexity; we're talking about computability.

I really feel like you're being purposefully obtuse here. Do you actually think symbolic manipulations are intrinsically necessary to simulate physics? And even if you do, computers can do symbolic manipulation just fine! Symbolic algebra packages are a dime a dozen.

Look, if there is any sense in which quantum mechanics actually explains the world, then there must be some mechanical process for taking the equations and turning them into predictions. If there isn't such a process, then quantum mechanics is not a predictive theory. It would be useless. Conversely, if there is such a process, then a computer can do it. So just have the computer do that thing. I don't see how this could be controversial! Do you think quantum mechanics requires a hypercomputer or something?
 
  • #17
Strilanc said:
there must be some mechanical process for taking the equations and turning them into predictions. If there isn't such a process, then quantum mechanics is not a predictive theory. It would be useless.
Of course we can predict only what we can compute.

But this is completely unrelated to simulation using random numbers (as you stipulate) or computability in terms of astronomically large resources.

Quantum mechanics is primarily about understanding the laws of physics. Making correct predictions is just a check of having understood correctly. And predictions count (scientifically and technologically) only if they are made fast enough and with implementable resources - otherwise they are completely irrelevant.
 
  • #18
A. Neumaier said:
But this is completely unrelated to simulation using random numbers (as you stipulate) or computability in terms of astronomically large resources.
Maybe the issue here is the difference between "simulation", which suggests a projection into reality/a 4D event, and "computation", which lives in a virtual Hilbert space.
How do you do that projection without random numbers? Isn't that projection needed to predict anything?

You spoke of a large molecule as an example: how can a pure wave function computation give us any information about what a molecule will look like, or worse, its temporal dynamics? Would it not actually look like some small fuzzy MWI view, impossible to map to observation?
 
  • #19
A. Neumaier said:
Of course we can predict only what we can compute.

But this is completely unrelated to simulation using random numbers (as you stipulate) or computability in terms of astronomically large resources.
A. Neumaier said:
But except for very simple problems, the dynamics is highly chaotic already at short time scales, so that (as in weather forecasts, or in the Lorenz attractor, on time scales of days rather than nanoseconds) the initial precision is most likely lost after a very short time.
Is it then the supposed probabilistic aspect of QT that diminishes its computability and causes the chaotic behaviour when simulating it? Because this probabilistic aspect would be easy to implement computationally. And now I come to think of it: if you implement randomness, then by definition the evolution of the simulation will possibly not follow any other simulation, or reality, because the random values are different from reality's! At least, if there is chaotic behaviour involved! :smile:
 
  • #20
Boing3000 said:
how can a pure wave function computation give us any information about what a molecule will look like
Nothing can do that, not even a simulation. The latter can only give one possibility of how it might look.
Boing3000 said:
You spoke of a large molecule as an example
Statistical mechanics predicts its properties in equilibrium, without any simulation. Much of quantum physics does not predict individual events but only scattering cross sections for tiny systems, which turn into deterministic reaction rates for macroscopic (nuclear or chemical) reactions. The latter are observable.
entropy1 said:
Is it then the supposed probabilistic aspect of QT that diminishes its computability and causes the chaotic behaviour when simulating it?
No; it is due to the fact that realistic Hamiltonians have a spectrum that extends to infinity and hence contains arbitrarily fast time scales that influence the slower time scales. Chaoticity is visible in approximations that track only a limited number of degrees of freedom; it is the origin of decoherence (if you interpret the derivations in the appropriate way). Decoherence happens on exceedingly small time scales, unless one takes extreme care to isolate a (tiny) system from its environment.
 
  • #21
A. Neumaier said:
Statistical mechanics predicts its properties in equilibrium, without any simulation.
Well, I think that the linearity of those 'classical' predictions is confronted with the same problem, namely: "equilibrium" is not a thing in nature, and thus simulation is the only way to actually pinpoint strange/chaotic features that may be useful from a practical/engineering point of view.

A. Neumaier said:
Much of quantum physics does not predict individual events but only scattering cross sections for tiny systems, which turn into deterministic reaction rates for macroscopic (nuclear or chemical) reactions. The latter are observable.
I understand that. You will observe some microscopic statistic. It is indeed a prediction, although not a precise one (it does not give the exact configuration).

But in this same scenario, if random events (like some tunneling) allow the protein to fold one way in one place (and in many other ways at many other places), the end result would be an excessively huge configuration space of "shapes". But if some of those "random" shape mutations turn out to be more stable, and maybe "contagious" to others, a few runs of the (albeit imprecise) simulation would predict/show that.
Could a more "pure" computation have predicted that? Or, if it is in the Hilbert space somewhere, would it be "undetectable"?
Isn't it the (pseudo)collapse to a certain real (4D) configuration that actually "speeds up" the process by removing less likely branches?
 
  • #22
entropy1 said:
Does that mean QM could be intrinsically deterministic?
If there are at least two different ways to interpret QM deterministically, isn't that odd? Can something be deterministic in two different ways?
 
  • #23
entropy1 said:
If there are at least two different ways to interpret QM deterministically, isn't that odd? Can something be deterministic in two different ways?
Classically, 'deterministic' usually means that given 1) initial conditions, 2) consistent equations of motion, and 3) a method of evolution that maintains the symmetries, we can predict exactly the values of dynamical variables in the future. Like the Hamiltonian method.

In QT the same thing applies to probability distributions of the values of dynamical variables. The probabilities evolve in time in a way that satisfies the conditions above, and in this sense QT is deterministic. But knowing only a probability does not determine the individual outcome. So QT is both deterministic and not - depending on how 'deterministic' is defined.

Hamilton's equations of motion can in fact be written in terms of the Poisson bracket of a dynamical variable with ##H##, in direct analogy with Heisenberg's equation of motion, which uses the commutator instead.
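Side by side, for an observable ##A## with no explicit time dependence, the standard textbook forms are
$$\frac{dA}{dt} = \{A, H\} \quad \text{(Hamilton, Poisson bracket)}, \qquad \frac{dA}{dt} = \frac{i}{\hbar}[H, A] \quad \text{(Heisenberg, commutator)}.$$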
 

1. What is the difference between probabilism and determinism?

Probabilism and determinism are two contrasting perspectives on the nature of reality. Probabilism suggests that events are not predetermined and can only be described in terms of probabilities. On the other hand, determinism proposes that all events are determined by previous causes and that there is no room for chance or randomness.

2. Is probabilism compatible with the concept of free will?

Some argue that probabilism is compatible with free will, as it allows for the possibility of making choices and decisions that are not predetermined. However, others argue that determinism is necessary for true free will, as it suggests that our actions are determined by our beliefs and desires.

3. How does probabilism explain the uncertainty of future events?

According to probabilism, the future is uncertain because there is always a level of randomness and chance involved in events. While we can make predictions based on probabilities, we cannot know for certain what will happen in the future.

4. Can both probabilism and determinism coexist in the scientific understanding of the universe?

Some scientists believe that both probabilism and determinism can coexist and complement each other in understanding the universe. For example, quantum mechanics incorporates probabilistic elements while also acknowledging the deterministic laws of nature.

5. How does the concept of causality relate to determinism?

Determinism is based on the belief that all events are caused by prior events and that there is a chain of cause and effect that determines the course of events. This concept of causality is closely tied to determinism and is used to explain the predictability and regularity of the universe.
