Schrödinger local and deterministic?

Summary
The discussion centers on whether the Schrödinger equation can be considered both local and deterministic. It is established that the equation is mathematically local and deterministic, as it describes a unique time evolution based on initial conditions. However, practical limitations in measuring macroscopic systems lead to a reliance on statistical approximations, introducing uncertainty and a non-deterministic view of these systems. The conversation also touches on the implications of the Heisenberg Uncertainty Principle and Bell's theorem, which suggest that quantum mechanics cannot be fully reconciled with classical notions of locality and determinism. Ultimately, while the Schrödinger equation itself is deterministic, the complexities of quantum mechanics and measurement introduce challenges to this interpretation.
Gerenuk
There have been many QM interpretation threads, but I haven't found this question answered:

Setting aside the fact that a complex probability amplitude is not something we can picture, is the Schrödinger equation local and deterministic at once?
 
Yes, the Schrödinger equation and the evolution of the wavefunction that follows from it are local and unitary. Unitary means that the time evolution of the wavefunction is unique and completely determined by the initial conditions. It is therefore deterministic.

There is, however, a major practical obstruction that prevents us from actually calculating this time evolution for any macroscopic system. This is partly because it is practically impossible to determine the initial state of a macroscopic system. But even if we did know this state, or if we were somehow able to fine-tune it, the time evolution itself is a many-body problem which is, again, computationally intractable.

We therefore always need to resort to some form of approximation, e.g. a statistical description of the system or ignoring a large number of degrees of freedom. Such a statistical description automatically introduces a degree of 'uncertainty' which manifests itself as a non-deterministic description of the system.

So even if you put the whole measurement problem aside, you still end up with a non-deterministic description of macroscopic systems due to practical limitations.
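The determinism claim above can be illustrated numerically. Here is a minimal sketch (my own toy model, not from the thread; units with ħ = 1, SciPy assumed available): the Schrödinger equation i dψ/dt = Hψ has the unique solution ψ(t) = exp(-iHt) ψ(0), so identical initial states always evolve identically, and the norm (total probability) is preserved.

```python
import numpy as np
from scipy.linalg import expm

N = 50
# Discretized 1-D kinetic-energy Hamiltonian (second-difference Laplacian).
H = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

# A normalized Gaussian initial wave packet.
x = np.arange(N)
psi0 = np.exp(-((x - N / 2) ** 2) / 20).astype(complex)
psi0 /= np.linalg.norm(psi0)

U = expm(-1j * H * 0.5)   # propagator for t = 0.5

psi_a = U @ psi0          # evolve once...
psi_b = U @ psi0          # ...and again from the same initial state

# Deterministic: identical initial conditions give identical final states.
assert np.allclose(psi_a, psi_b)
# Unitary: the norm (total probability) is preserved.
assert np.isclose(np.linalg.norm(psi_a), 1.0)
```

Note that the Hilbert-space dimension of a many-body system grows exponentially with particle number, which is exactly the computational intractability mentioned above.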
 
So how does that compare to the saying "QM can't be local and deterministic" by Bell's theorem and similar ones?
 
Bell's theorem plus the results of experiments testing it (insofar as one accepts those results and/or the validity of Bell's theorem with respect to those experiments, which is the source of the vigorous arguments here about the subject :smile:) support the statement that "QM can't be local and realistic", which is not the same thing as deterministic.
 
Yes sure. And I do not wish to start yet another Bell discussion.

But what's wrong about saying the Schrödinger equation is local and deterministic?
Mathematically it does look so.
 
Gerenuk said:
Yes sure. And I do not wish to start yet another Bell discussion.

But what's wrong about saying the Schrödinger equation is local and deterministic?
Mathematically it does look so.

I believe you said it, "...the fact that a complex probability amplitude is not something we can picture..." is the reason. Well, if it can only exist as math, it's not physics, just math. So, from the perspective of a mathematician... it is as you say. From the perspective of a Physicist... it is too, but it's not useful if it can't be made to do work. Hence all of the rest... so I'd say to answer your question: To avoid confusion.
 
I guessed so. Now I'm trying to get some ideas to understand how determinism gets lost... :)
 
Gerenuk said:
I guessed so. Now I'm trying to get some ideas to understand how determinism gets lost... :)

See, that's not too hard, because determinism is lost when we have to calculate positions, velocities, etc. as probabilities. It all comes from the Heisenberg Uncertainty Principle (HUP), now backed up by the CMB surveys.
 
Frame Dragger said:
See, that's not too hard, because determinism is lost when we have to calculate positions, velocities, etc. as probabilities. It all comes from the Heisenberg Uncertainty Principle (HUP), now backed up by the CMB surveys.

I don't agree that the UP supports an external indeterminism in events; the UP does, however, highlight our lack of knowledge of a system. Just because there is a lack of knowledge on our part should not suggest that the universe is not deterministic.
 
  • #10
ManyNames said:
I don't agree that the UP supports an external indeterminism in events; the UP does, however, highlight our lack of knowledge of a system. Just because there is a lack of knowledge on our part should not suggest that the universe is not deterministic.

The CMB would beg to differ, barring a superdeterministic uneven distribution of "stuff" at 360K years post-BB...
 
  • #11
Calculating probabilities is fine. The problem comes in when someone tries to make a theory that works with probabilities alone.

So if people wouldn't try to squeeze QM into basic probability theories, then QM would be local, deterministic and even linear?

Maybe some sophisticated ingredient can make even the probabilities logical again.
 
  • #12
Gerenuk said:
Calculating probabilities is fine. The problem comes in when someone tries to make a theory that works with probabilities alone.

So if people wouldn't try to squeeze QM into basic probability theories, then QM would be local, deterministic and even linear?

Maybe some sophisticated ingredient can make even the probabilities logical again.

Time to start building the AI's that can find that... maybe they'll even be nice enough to try and explain it to us! :wink:
 
  • #13
Frame Dragger said:
The CMB would beg to differ, barring a superdeterministic uneven distribution of "stuff" at 360K years post-BB...

No, I beg to differ, because the UP concerns what we can know - it's a limitation on knowledge which does not impede determinism.
 
  • #14
ManyNames said:
No, I beg to differ, because the UP concerns what we can know - it's a limitation on knowledge which does not impede determinism.

Ok... then how is it that something which is a limitation on KNOWLEDGE managed to affect the (should-have-been-EVEN) distribution of "stuff" in the early universe? The HUP explains that nicely, as does SUPERdeterminism. The HUP + Determinism = Horse****.
 
  • #15
Wasn't the fundamental problem that the equation didn't properly model the interaction between particle and wave as de Broglie envisioned? (It rather just models "some wave" of unknown origin and constitution)
 
  • #16
PhilDSP said:
Wasn't the fundamental problem that the equation didn't properly model the interaction between particle and wave as de Broglie envisioned? (It rather just models "some wave" of unknown origin and constitution)

Yeah... sadly yes... and the Bohmian interpretation replaces that issue with a Pilot wave of "unknown origin and constitution" as you put it so well. Welcome to QM... I need some aspirin. :wink:

EDIT: Hence we're left with 50-50 chances, or worse, 50-50-1! Never good when you get 101% in a physical theory...
 
  • #17
Frame Dragger said:
Ok... then how is it that something which is a limitation on KNOWLEDGE managed to affect the (should-have-been-EVEN) distribution of "stuff" in the early universe? The HUP explains that nicely, as does SUPERdeterminism. The HUP + Determinism = Horse****.

You do realize that particles are simply statistical averages, right? Physics in general is a statistical theory at best, yes? It's statistical because we don't have all the knowledge of a quantum system, but this is because of our lack of knowledge, not because there needs to be an indeterministic world external to our limited knowledge.
 
  • #18
ManyNames said:
You do realize that particles are simply statistical averages, right? Physics in general is a statistical theory at best, yes? It's statistical because we don't have all the knowledge of a quantum system, but this is because of our lack of knowledge, not because there needs to be an indeterministic world external to our limited knowledge.

You keep saying this, but this is generally rejected as a viewpoint. The HUP is not about lack of knowledge, although at one time that was a common belief. It is generally held that particles have attributes only within the context of a measurement.
 
  • #19
ManyNames said:
You do realize that particles are simply statistical averages, right? Physics in general is a statistical theory at best, yes? It's statistical because we don't have all the knowledge of a quantum system, but this is because of our lack of knowledge, not because there needs to be an indeterministic world external to our limited knowledge.

To paraphrase DrChinese in my own words, representing my own opinion, "No, I don't realize that, because observational data has shown the HUP is a physical law, not merely a statistical event horizon for observers."
 
  • #20
Gerenuk said:
There have been many QM interpretation threads, but I haven't found this question answered:

Setting aside the fact that a complex probability amplitude is not something we can picture, is the Schrödinger equation local and deterministic at once?

Classical determinism: Repeating the same experiment many times always has the same result. Classical mechanics allows us to determine that result.
Quantum determinism: Repeating the same experiment many times yields a unique probability distribution of all possible results. Quantum mechanics allows us to determine that probability distribution.
Quantum mechanics does not predict the experimental result; it is not deterministic in the classical sense.

Locality is a property of the space-time of classical physics. It is classical in nature. The wavefunction (probability amplitude) is defined in a Hilbert space. It seems to me that locality is an issue only if the wavefunction propagates in space-time, as many believe.

In the classical sense, quantum mechanics is neither deterministic nor local.
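The distinction drawn above can be made concrete with a small simulation (my own sketch, not the poster's): repeating the "same experiment" on a qubit prepared in ψ = (|0⟩ + |1⟩)/√2 gives unpredictable individual results, but the empirical frequencies reproduce the Born-rule distribution P(k) = |ψ_k|².

```python
import numpy as np

rng = np.random.default_rng(0)
amplitudes = np.array([1, 1]) / np.sqrt(2)
probs = np.abs(amplitudes) ** 2        # Born rule: P(k) = |psi_k|^2

# "Repeat the same experiment many times": sample outcomes from P(k).
outcomes = rng.choice([0, 1], size=100_000, p=probs)

print(outcomes[:10])      # individual results: not predicted by the theory
print(outcomes.mean())    # empirical frequency of outcome 1: approaches 0.5
```

This is exactly quantum determinism in the sense above: the distribution is uniquely determined, the single run is not.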
 
  • #21
Gerenuk said:
I guessed so. Now I'm trying to get some ideas to understand how determinism gets lost... :)

Hello Gerenuk,
The answer to your question is simple and I am surprised that nobody has given it yet.

The quantum theory relies on two processes. One deterministic process, called U, like Unitary, governed by Schrödinger's equation, and a probabilistic process, called R, like Reduction, governed by Born's rule.

Both are needed for the theory to actually work. The loss of determinism occurs inside the R process, which has nothing to do with Schrödinger's equation.
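The two processes can be sketched numerically. In this toy model (my own, with ħ = 1 and an arbitrary Hamiltonian), the U step is deterministic unitary evolution, and the R step draws one outcome via the Born rule and collapses the state:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

H = np.array([[0.0, 1.0], [1.0, 0.0]])     # toy Hamiltonian (Pauli-X)
psi = np.array([1.0, 0.0], dtype=complex)  # start in |0>

# U process: deterministic, reversible unitary evolution psi(t) = exp(-iHt) psi(0).
psi = expm(-1j * H * 0.3) @ psi

# R process: measure in the {|0>, |1>} basis -- determinism is lost here.
probs = np.abs(psi) ** 2                   # Born rule
outcome = rng.choice([0, 1], p=probs)
psi = np.eye(2)[outcome].astype(complex)   # collapse onto the observed eigenstate

print("outcome:", outcome, "post-measurement state:", psi)
```

Everything before the last three lines is uniquely fixed by the initial state; only the draw of `outcome` is probabilistic.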
 
  • #22
Pio2001 said:
Hello Gerenuk,
The answer to your question is simple and I am surprised that nobody has given it yet.
I'd be very, very careful with such a statement ;-)
Usually the guys crying out "it's so easy!" don't have the slightest clue what the problem is about.
This observation doesn't apply here, but it is one thing to remember :)

Pio2001 said:
Both are needed for the theory to actually work. The loss of determinism occurs inside the R process, which has nothing to do with Schrödinger's equation.
To my knowledge the R process is ill-defined, so it's hard to use it for arguments. I mean, when is an observation an observation? Why don't we consider human beings as quantum objects and thus have U processes only?
And how does this R process lose locality or determinism?
For me it's very important not to just know a keyword, but to really understand where, mathematically, either locality or determinism is lost. Or why some people say it is lost at all, whereas all the theory seems to be based on local and deterministic concepts?
 
  • #23
Pio2001 said:
Hello Gerenuk,
The answer to your question is simple and I am surprised that nobody has given it yet.

The quantum theory relies on two processes. One deterministic process, called U, like Unitary, governed by Schrödinger's equation, and a probabilistic process, called R, like Reduction, governed by Born's rule.

Well, that is no longer agreed upon I think, since decoherence is now a well-established experimental and theoretical phenomenon that shows it is possible to have very rapid processes that proceed in a unitary fashion according to the TDSE, yet produce observations that are consistent with the original "collapse" (or reduction) theories. In fact, you will see the phrase "there is no collapse" thrown around a lot on this forum.

Both are needed for the theory to actually work. The loss of determinism occurs inside the R process, which has nothing to do with Schrödinger's equation.

I would say that it is very much an open question whether or not there is in fact a loss of determinism as you claim.
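The decoherence point can be sketched with a toy model (my own, with an assumed per-qubit environment overlap): entangling a system qubit with more and more environment qubits suppresses the off-diagonal coherences of its reduced density matrix, so it looks like a classical mixture even though everything evolved unitarily and no single outcome was selected.

```python
import numpy as np

def reduced_density_matrix(n_env, overlap=0.8):
    # System starts in (|0> + |1>)/sqrt(2); each environment qubit ends up in
    # |e0> or |e1> with <e0|e1> = overlap. Tracing out the environment leaves
    # off-diagonal coherences suppressed by overlap**n_env.
    c = overlap ** n_env
    return 0.5 * np.array([[1.0, c], [c, 1.0]])

rho_early = reduced_density_matrix(n_env=1)
rho_late = reduced_density_matrix(n_env=50)

print(rho_early)   # visible off-diagonal coherence
print(rho_late)    # off-diagonals ~ 0: diagonal mixture, both outcomes still present
```

This illustrates why decoherence reproduces collapse-like observations without, by itself, picking one result, which is the open question at issue.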
 
  • #24
Gerenuk said:
To my knowledge the R process is ill-defined, so it's hard to use it for arguments. I mean, when is an observation an observation? Why don't we consider human beings as quantum objects and thus have U processes only?

Because the R process makes experimental predictions that the U process doesn't. For example, that YOU will get this or that result when you measure a given system in a given way. If you keep the U process only and use it to build a many-worlds interpretation, you lose the definition of "you", and the above experimental prediction is no longer defined.

Gerenuk said:
And how does this R process lose locality or determinism?

The R process lacks determinism in its axiomatic definition, and says nothing about locality.

Later, Bell, CHSH, GHZ, and Mermin (excuse me if I forget some) showed that locality and determinism could not coexist. In a larger context, we can say that locality, determinism and realism can't coexist in quantum mechanics.

Some people, however, have suggested workarounds. Mark Rubin, for example, in his article about local realism in the Heisenberg picture of operators in the MWI, or JesseM in this forum, with his idea about pasting parallel universes together when their future light-cones meet (which is more or less the same idea, as far as I understand). These ideas deserve to be developed. I'm working on JesseM's idea in my spare time.

Gerenuk said:
For me it's very important not to just know a keyword, but to really understand where, mathematically, either locality or determinism is lost. Or why some people say it is lost at all, whereas all the theory seems to be based on local and deterministic concepts?

They are lost when you violate Bell's inequality in an EPR-like experiment. No model of the experiment has yet been given that
1) describes what happens in terms of realistic objects, and
2) predicts the violation of the inequality by means of the above description.

SpectraCat said:
Well, that is no longer agreed upon I think, since decoherence is now a well-established experimental and theoretical phenomenon that shows it is possible to have very rapid processes that proceed in a unitary fashion according to the TDSE, yet produce observations that are consistent with the original "collapse" (or reduction) theories.

Consistent, yes, but with not as much predictive power. They do not predict the violation of the inequality without completing decoherence with the last part of the R process, which consists in picking one of the possible results out of many, in a non-deterministic way.

SpectraCat said:
I would say that it is very much an open question whether or not there is in fact a loss of determinism as you claim.

I don't disagree, but Gerenuk's question was simple, and I gave the simple answer, from which we can go on and start further discussions :smile:
 
  • #25
Pio2001 said:
Because the R process makes experimental predictions that the U process doesn't. Example, that YOU will get this or that result when you measure a given system in a given way.
I don't think this R process idea is a satisfactory explanation. And the many attempts at interpretations probably share the same view. It's not well defined when someone is measuring and when he isn't, or what reality means.

Pio2001 said:
Later, Bell, CHSH, GHZ, and Mermin (excuse me if I forget some), have shown that locality and determinism could not coexist. In a larger context, we can say that locality, determinism and realism can't coexist in quantum mechanics.
I do not want to discuss their work. I'll go through it later, but I know they all make their own hidden assumptions. Anyway:

If I let the universe run for a long time governed by the Schrödinger equation, and if I make one measurement in the end, then everything was a local and deterministic U process to the very end and I can extract probabilities from this l&d process? Right?

And then someone else comes along and says, I'm only a stupid quantum process and he is the real observer and waiting for an even longer time than me before he does the measurement. So in his theory everything was l&d an even longer time?!

It seem everything is l&d at all times. (unless you insist on removing the wavefunction and introduce real probabilities)
 
  • #26
Gerenuk said:
I don't think this R process idea is a satisfactory explanation.

It depends on the explanation you're looking for.

If you want an explanation for the presence of probabilities in the theory in spite of Schrödinger's equation, then the R process is a good answer.

If you want an explanation of reality, then the R process shows severe limitations.

Gerenuk said:
If I let the universe run for a long time governed by the Schrödinger equation, and if I make one measurement in the end, then everything was a local and deterministic U process to the very end and I can extract probabilities from this l&d process? Right?

I'm not sure... If we try to model Alice and Bob's EPR experiment this way, we'll run into their decoherence into two different preferred bases. If we simplify their states enough, we don't have the information necessary to get the final probabilities giving the inequality violation. We must not simplify, but keep the initial entanglement scattered among the billions of particles of their environment.
I don't know exactly how it works, nor even whether the description of the system in terms of density matrices holds enough information to predict Alice and Bob's correlations.

And if you describe them as state vectors, it doesn't work, because you'd have to consider them as, for example, spin-1/2 particles in order to derive the right probabilities, which they are not: Alice with her detector turned rightside, summed with Alice with her detector turned leftside, does not equal Alice with her detector turned upside, because these three state vectors are orthogonal (they're three eigenstates of position).
However, making this summation is mandatory in order to get the inequality violated!

Don't think in terms of Schrödinger's cats. You can describe them in local and deterministic ways.
Think about Alice and Bob measuring spin-1/2 particles along directions alpha and beta and getting a correlation of -cos(beta-alpha). That's much trickier.
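That correlation can be checked directly. A short sketch (standard CHSH angle choices; not from the thread itself): the singlet-state prediction E(a, b) = -cos(b - a) pushes the CHSH combination to 2√2, beyond the bound of 2 that any local deterministic model must obey.

```python
import numpy as np

def E(a, b):
    # Quantum prediction for the spin-singlet correlation at analyzer angles a, b.
    return -np.cos(b - a)

# Standard CHSH angle choices (radians).
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)   # 2*sqrt(2) ~ 2.828, violating the local-deterministic bound of 2
```

Any model assigning predetermined local outcomes gives |S| ≤ 2, which is exactly where locality and determinism together fail.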
 
  • #27
...or maybe Alice's summation holds true thanks to constructive interference scattered through the environment's microscopic state?
 
  • #28
Pio2001 said:
nor even whether the description of the system in terms of density matrices holds enough information to predict Alice and Bob's correlations.

I'm not used to the density matrix formalism, but on second thought, I think that it holds the necessary information. However, it is non-local.

I have already tried something like you suggest: keep the U evolution until the end and get the probabilities when Alice and Bob meet.
The problem is that in order to have the right probabilities, you need Alice and Bob to be themselves elementary spin-1/2 particles, with the problem mentioned above.
Another solution is to have them decohere before they meet, but you then need to associate amplitudes with their decohered copies in order to predict the inequality violation, and in this case, the amplitude of Bob's copies must depend on the angle chosen by Alice and vice versa, which is non-local! Otherwise, the inequality is not violated.

I eventually found a solution, relying on the many-world formalism. It can be summarized in three additional postulates.
1) When a system splits into many worlds, its copies keep track of the wave function that caused it to split.
2) When a system splits into many worlds, it also keeps track of a unique label that makes it distinguishable from any other similar system.

-> One split = one wave function = one label = many worlds (each one described by a wave function).

3) When some local parts of a split system meet, its copies recognize each other by means of the label given in postulate 2. Then the probability for a local copy to meet one or the other copy from another place is given by the usual Born rule applied to the initial wave function that the copies carry with them according to postulate 1.

This way, I think it works. I still have to write out the whole calculation.
 
  • #29
First, I'd like to see mathematically how locality is lost. Is it the R process? Is it possible to write it down so that it looks non-local from the mathematical point of view?

I'm not sure I've understood your explanation fully. Why can't I say B knows A's complete state and her measurement apparatus completely, and thus is able to model all of A's measurements from an external view with U processes only?
 
  • #30
Hello Gerenuk,
I can do that. I'll try tonight (CET) with entangled photons. They are more interesting than spin-1/2 particles, because they are destroyed during the measurement.
 
