I Quantum mechanics is not weird, unless presented as such

Quantum mechanics is often perceived as "weird," a notion that some argue hinders real understanding, particularly for students. Critics of this characterization suggest that quantum mechanics can be derived from reasonable assumptions without invoking measurement devices, and that avoiding such devices is essential for a valid derivation. The discussion highlights the inadequacy of certain interpretations, such as the ensemble interpretation, which rely on observations that may not have existed in the early universe. Participants emphasize the need for clearer explanations of quantum mechanics that bridge the gap between the formal theory and public understanding. Ultimately, while quantum mechanics may seem strange, especially to laypersons, it can be presented in a way that aligns more closely with classical mechanics.
  • #271
zonde said:
question whether reality is local is rather much harder
I don't think reality is local in Bell's sense. It is local in the sense of QFT, but these are completely different concepts.

But I also don't think that nonlocality alone makes QM weird; what makes it weird is nonlocality together with poor classical language for quantum phenomena.
 
  • #272
A. Neumaier said:
I don't think reality is local in Bell's sense. It is local in the sense of QFT, but these are completely different concepts.
I am trying not to get lost in all the different locality concepts. So I will hold on to this one: a measurement result at one location is not influenced by (measurement) parameters at another, spacelike separated location.
But what is local in the QFT sense?

A. Neumaier said:
But I also don't think that nonlocality alone makes QM weird; what makes it weird is nonlocality together with poor classical language for quantum phenomena.
If you take nonlocality as some FTL phenomenon, then it's not so weird. On the other hand, if you take nonlocality as a totally novel philosophical concept like "distance is an illusion", then it's totally weird and incompatible with (philosophical) realism.
Speaking about classical language, I think the problem is the lack of common agreement on which classical concepts can be revised and which ones are fundamental to science.
Say, particles are just a model, so the concept can be revised. But then you have to demonstrate that you can recover the predictions of particle-based models, or recover particles in some limit.
 
  • #273
zonde said:
But you have to demonstrate that you can recover predictions from particle based models or recover particles at some limit.
This had already been demonstrated long before the advent of quantum mechanics. There is a well-known way to recover particles from fields called geometric optics. The particle concept is appropriate (and conforms with the intuition about classical particles) precisely when the conditions for approximating wave equations by geometric optics are applicable.

zonde said:
what is local in the QFT sense?
It means: ''observable field quantities ##\phi(x_1),\ldots,\phi(x_n)## commute if their arguments are mutually spacelike.'' This is the precise formal definition.
As a consequence (and, conversely, as an informal motivation for this condition), these quantities can (at least in principle) be independently prepared.
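For the textbook case of a free real scalar field this condition can be written out explicitly (a standard result, included here only as an illustration): the commutator is a c-number, the Pauli-Jordan function, and it vanishes at spacelike separation,
$$[\phi(x),\phi(y)] \;=\; i\,\Delta(x-y), \qquad \Delta(x-y)=0 \quad\text{whenever } (x-y)^2<0 \ \text{(spacelike, signature } +,-,-,-).$$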

It is not a statement about measurement, which even in the simplest case is a complicated statistical many-particle problem, since a permanent record must be formed through the inherent quantum dynamics of system+measuring device+environment.

That the traditional quantum foundations take a human activity, the measurement process, as fundamental is peculiar to quantum mechanics and part of the reason why interpretations phrased in these terms lead to many weird situations.
 
  • #274
A. Neumaier said:
these quantities can (at least in principle) be independently prepared.
Note that unlike measurement, which didn't exist before mankind learned to count, preparation is not primarily a human activity but something far more objective.

Nature itself prepares all the states that can actually be found in Nature - water in a state of local equilibrium, alpha, beta and gamma-rays, chiral molecules in a left-handed or right-handed state rather than their superposition, etc. - without any special machinery and without any human being having to do anything or to be around. We, in contrast, can prepare something only if we know Nature well enough to control these preparation processes. That's the art of designing experiments.
 
  • #275
If I have understood your position accurately, you've suggested that all observables have a definite state at all times regardless of whether they are in principle measurable/observable. So, I assume that in your conception of the yet to be fully developed quantum field theory, the unitary evolution of the cosmological quantum field is entirely deterministic. Yes?
 
  • #276
Feeble Wonk said:
the unitary evolution of the cosmological quantum field is entirely deterministic. Yes?
Yes. It is only surprising and looks probabilistic to us, because we do only know a very small part of its state. (This is one of the reasons I believe also in strong AI. But if you want to discuss this, please don't do it here but open a new thread in the appropriate place!)
 
  • #277
A. Neumaier said:
Yes. It is only surprising and looks probabilistic to us, because we do only know a very small part of its state. (This is one of the reasons I believe also in strong AI. But if you want to discuss this, please don't do it here but open a new thread in the appropriate place!)

Sorry. I'm confused by the "looks probabilistic" reference.
 
  • #278
Feeble Wonk said:
Sorry. I'm confused by the "looks probabilistic" reference.
A. Neumaier said:
It is only surprising and looks probabilistic to us, because we do only know a very small part of its state.
Well, if one takes a deterministic dynamical system and looks at part of it without knowing the (classical deterministic) state of the remainder (except very roughly), one can no longer make deterministic predictions. But if the part one knows is sufficiently well chosen and one doesn't demand too high an accuracy of the predictions (or predictions for too long times), then one can still give a probabilistic reduced dynamics for the known part of the system. Physicists learned with time which systems have this property!

Weather forecasting is a case in point. It is considered a completely classical dynamics, but because we have incomplete information we can only make stochastic models for the part we can get data for.

The physical process by which one gets the reduced system description is, on the most accurate level, always the same. It is called the projection operator formalism. There are also technically simpler but less accurate techniques.
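As a toy illustration of the first point (not the projection operator formalism itself, just a minimal sketch with an arbitrarily chosen map): a fully deterministic chaotic system, of which we record only one coarse-grained bit per step, already produces a record that is statistically indistinguishable from coin tossing.

```python
import numpy as np

# Fully deterministic dynamics: the logistic map at r = 4 (chaotic).
# We "observe" only one coarse-grained bit of the state (is x above 1/2?).
# The full trajectory is fixed by x0, but without exact knowledge of x0
# the observed bit sequence looks like a sequence of fair coin tosses.

def coarse_record(x0, n_steps):
    x = x0
    bits = np.empty(n_steps, dtype=int)
    for i in range(n_steps):
        x = 4.0 * x * (1.0 - x)            # deterministic update
        bits[i] = 1 if x > 0.5 else 0      # coarse-grained observation
    return bits

bits = coarse_record(x0=0.123456789, n_steps=10_000)
print("fraction of 1s:   ", bits.mean())                             # ~0.5
print("lag-1 correlation:", np.corrcoef(bits[:-1], bits[1:])[0, 1])  # ~0
```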
 
  • #279
A. Neumaier said:
Well, if one takes a deterministic dynamical system and looks at part of it without knowing the (classical deterministic) state of the remainder (except very roughly), one can no longer make deterministic predictions. But if the part one knows is sufficiently well chosen and one doesn't demand too high an accuracy of the predictions (or predictions for too long times), then one can still give a probabilistic reduced dynamics for the known part of the system. Physicists learned with time which systems have this property!

Weather forecasting is a case in point. It is considered a completely classical dynamics, but because we have incomplete information we can only make stochastic models for the part we can get data for.

The physical process by which one gets the reduced system description is, on the most accurate level, always the same. It is called the projection operator formalism. There are also technically simpler but less accurate techniques.

Yes, and I think that the relationship between determinism and apparent randomness gets at the heart of what is different about quantum mechanics.

Classical systems are nondeterministic for two reasons:
  1. We only know the initial conditions to a certain degree of accuracy. There are many possible states that are consistent with our finite knowledge, and those different states, when evolved forward in time, eventually become macroscopically distinguishable. So future macroscopic conditions are not uniquely determined by present macroscopic conditions.
  2. We only know the conditions in one limited region. Eventually, conditions in other regions will have an effect on this region, and that effect is not predictable.
If we assume (as Einstein did) that causal influences propagate at lightspeed or slower, then we can eliminate the second source of nondeterminism; we don't need to know what conditions are like everywhere, just in the backward lightcone of where we are trying to make a prediction.

So the real weirdness of quantum mechanics is that we have a nondeterminism that doesn't seem to be due to lack of information about the details of the present state.

Or we can put it a different way: Quantum mechanics has a notion of "state" for a system, namely the density matrix, which evolves deterministically with time. But that notion of state does not describe what we actually observe, which is definite outcomes for measurements. The density matrix may describe a system as a 40/60 mixture of two different eigenstates, while our observations show a definite value for whatever observable we measure. So what is the relationship between what we see (definite, nondeterministic results) and what QM describes (deterministic evolution without sharp values for observables)? You could take the approach in classical statistical mechanics; the state (the partition function, or whatever) does not describe a single system, but describes an ensemble of similarly-prepared systems.
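A minimal numerical illustration of this split (toy Hamiltonian and numbers chosen arbitrarily): the density matrix of a qubit evolves deterministically under the von Neumann equation, while the Born rule only yields outcome probabilities for any single run.

```python
import numpy as np

# Deterministic evolution of a qubit density matrix (von Neumann equation),
# contrasted with the probabilistic single-run outcomes of the Born rule.
# Toy Hamiltonian H = (omega/2) * sigma_x, units with hbar = 1.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
P_up = np.array([[1, 0], [0, 0]], dtype=complex)     # projector onto |0> ("up")

omega = 1.0
H = 0.5 * omega * sx
rho0 = np.diag([0.4, 0.6]).astype(complex)           # the 40/60 mixture

def rho_t(t):
    # Exact solution rho(t) = U rho0 U^dagger, with U = exp(-i H t)
    eigval, eigvec = np.linalg.eigh(H)
    U = eigvec @ np.diag(np.exp(-1j * eigval * t)) @ eigvec.conj().T
    return U @ rho0 @ U.conj().T

t = 0.7
p_up = np.real(np.trace(rho_t(t) @ P_up))   # Born rule: a deterministic number
print("P(up) at t =", t, ":", p_up)

# A single measurement, by contrast, gives one definite outcome at random:
outcome = np.random.choice(["up", "down"], p=[p_up, 1 - p_up])
print("one run gives:", outcome)
```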

But in the case of classical statistical mechanics, it's believed that there are microscopic differences between members of the ensemble, and that these microscopic differences are only captured statistically by the thermodynamic state. It's believed that each member of the ensemble is actually governed by Newtonian physics. So in classical statistical mechanics, there are two different levels of description: A specific element of an ensemble can be described using Newton's laws of motion, while we can take a statistical average over many such elements to get a thermodynamic description, which is more manageable than Newton when the number of components becomes huge.

So if the relationship between the QM state and the actual observed world is the same as for classical statistical mechanics, that QM provides an ensemble view, then that would seem to suggest that there is a missing dynamics for the individual element of the ensemble. In light of experiments such as EPR, it would appear that this missing dynamics for the single system would have to be nonlocal.
 
  • #280
stevendaryl said:
this missing dynamics for the single system would have to be nonlocal.
Yes. The missing dynamics is that of the environment.

In all descriptions of Bell-like experiments, the very complex environment (obviously nonlocal, since it is the remainder of the universe) is reduced to one single act - the collapse of the state. Thus even if the universe evolves deterministically, ignoring the environment of a tiny system to this extent is sufficient cause for turning the system into a random one. (The statistical mechanics treatment in the review paper that I cited and you found too long to study tries to do better than just postulating collapse.)

It is our (for reasons of tractability) very simplified models of something that is in reality far more complex that lead to the nondeterminism of the tiny subsystem getting our attention. This is not really different from taking a classical multiparticle system and then considering the dynamics of a subsystem alone - it cannot be deterministic. Take the system of a protein molecule and a drug molecule aimed at blocking it. If you assume a deterministic model for the complete system (using molecular dynamics) to be the true dynamics, and the active sites of both molecules as the reduced system, with the remainder of the molecules assumed rigid (which is a reasonable simplified description), you'll find that the reduced system dynamics (computed from the projection operator formalism) will have inherited randomness from the large system, although the latter is deterministic.

stevendaryl said:
So the real weirdness of quantum mechanics is that we have a nondeterminism that doesn't seem to be due to lack of information about the details of the present state.

No. The real weirdness is that people discuss quantum foundations without taking into account the well-known background knowledge about chaotic systems. They take their toy models for the real thing, and are surprised that there remains unexplained ''irreducible'' randomness.
 
  • #281
stevendaryl said:
If we assume (as Einstein did) that causal influences propagate at lightspeed or slower, then [...] we don't need to know what conditions are like everywhere, just in the backward lightcone of where we are trying to make a prediction.

But we need to know the complete details of the universe in the backward light cones with apex at the spacetime positions at which we measure. This means all the details of the preparation and transmission, including all the details of the preparation equipment and the transmission equipment. For a nonlocal experiment over 1 km, the two backward lightcones, at the time the common signal was prepared, already span a spherical region of at least this size, which is a huge nonlocal system on all of whose details the prediction at the final two points may depend.

Thus to ''know what conditions are like just in the backward lightcone'' is a very formidable task, as any lack of detail in our model of what we assume in this light cone contributes to the nondeterminism. You dismiss this task with the single word ''just''.

Not a single paper I have seen takes this glaring loophole into account.
 
  • #282
A. Neumaier said:
Yes. The missing dynamics is that of the environment.

In all descriptions of Bell-like experiments, the very complex environment (obviously nonlocal, since it is the remainder of the universe) is reduced to one single act - the collapse of the state. Thus even if the universe evolves deterministically, ignoring the environment of a tiny system to this extent is sufficient cause for turning the system into a random one. (The statistical mechanics treatment in the review paper that I cited and you found too long to study tries to do better than just postulating collapse.)

Well, that's interesting, but surely that's not a standard view, that the apparent nondeterminism of QM would be resolved by ignored details of the rest of the universe?
 
  • #283
stevendaryl said:
Well, that's interesting, but surely that's not a standard view, that the apparent nondeterminism of QM would be resolved by ignored details of the rest of the universe?
None of my views on the foundations of quantum mechanics, as argued in this thread, is standard. Does it matter? It resolves or at least greatly reduces all quantum mysteries - only that's what matters.

In the past, I had spent a lot of time (too much for the gains I got) studying in detail the available interpretations of QM and found them wanting. Then I noticed more and more small but important things that people ignore routinely in foundational matters although they are discussed elsewhere:

  • It is fairly well known that real measurements are rarely von Neumann measurements but POVM measurements. Nevertheless, people are content to base their foundations on the former.
  • It is well-known that real systems are dissipative, and it is known that these are modeled in the quantum domain by Lindblad equations (lots of quantum optics literature exists on this). Nevertheless, people are content to base their foundations on a conservative (lossless) dynamics.
  • It is well-known how dissipation results from the interaction with the environment. Nevertheless, people are content to ignore the environment in their foundations. (This changed a little with time. There is now often a lip service to decoherence, and also often claims that it settles things when taken together with the traditional assumptions. It doesn't, in my opinion.)
  • It is known (though less well-known) that models in which the electromagnetic field is treated classically and only the detector is quantized produce exactly the same Poisson statistics for photodetection as models employing a quantum field in a coherent state. This conclusively proves that the detector signals are artifacts produced by the detector and cannot be evidence of photons (since they are completely absent in the first model). Nevertheless, people are content to treat in their foundations detector signals as proof of photon arrival. (A small numerical sketch of this equivalence follows the list.)
  • It is well-known that the most fundamental theory of Nature is quantum field theory, in which particles are mere field excitations and not the basic ontological entities. Nevertheless, people are content to treat in their foundations quantum mechanics in terms of particles.
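Here is the sketch referred to in the photodetection item above (purely illustrative parameters; the semiclassical side is modeled simply as constant classical intensity plus an independently firing detector): both descriptions yield the same Poisson counting statistics.

```python
import numpy as np
from math import exp, factorial

# Semiclassical model: classical light of constant intensity falls on a
# quantized detector; in each small time bin dt the detector fires
# independently with probability rate*dt. Over a window T the counts are
# Poisson with mean rate*T -- identical to the photon-number statistics of
# a coherent state with |alpha|^2 = rate*T. Parameters are arbitrary.

rng = np.random.default_rng(0)
rate = 50.0                      # effective detection rate (arbitrary units)
T, dt = 0.1, 1e-4                # counting window and time bin
n_bins = int(T / dt)
mean_counts = rate * T           # = 5.0

counts = rng.binomial(n_bins, rate * dt, size=200_000)   # independent bins

for n in range(10):
    sim = np.mean(counts == n)
    coh = exp(-mean_counts) * mean_counts**n / factorial(n)   # Poisson pmf
    print(f"n={n}: semiclassical detector {sim:.4f}   coherent state {coh:.4f}")
```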
Taken together, I could no longer take the mainstream foundational studies seriously, and lost interest in them. Instead, an alternative view formed in my mind and became more and more comprehensive with time. Where others saw weirdness I saw lack of precision in the arguments and arguments with too simplified assumptions, and I saw different ways of phrasing in ordinary language exactly the same math that underlies the standard, misleading language.

This being said, let me finally note that it is well-known that decoherence turns pure states into mixed states. Since pure states form a deterministic quantum dynamics, this shows that, for purely mathematical reasons - and independent of which ontological status one assigns to the wave function - accounting for the unmodelled environment produces statistical randomness in addition to the alleged irreducible quantum randomness inherent in the interpretation of the wave function. Thus, to answer your question,
stevendaryl said:
surely that's not a standard view, that the apparent nondeterminism of QM would be resolved by ignored details of the rest of the universe?
I conclude that it is a standard view that ignoring details of the rest of the universe introduces additional nondeterminism. The only nonstandard detail I am suggesting is that the same mechanism that is already responsible for a large part of the observed nondeterminism (all of statistical mechanics is based on it) can as well be taken to be responsible for all randomness. Together with shifting the emphasis from the wave function (a mathematical tool) to the density matrix (a matrix well-known to contain the physical information, especially the macroscopic, classical one), all of a sudden many things make simple sense. See my post #257 and its context.
Those who believe in the power of Occam's razor should therefore prefer my approach. It also removes one of the philosophical problems of quantum mechanics - to give irreducible randomness an objective meaning.
 
  • #284
A. Neumaier said:
None of my views on the foundations of quantum mechanics, as argued in this thread, is standard. Does it matter? It resolves or at least greatly reduces all quantum mysteries - only that's what matters.

  • It is fairly well known that real measurements are rarely von Neumann measurements but POVM measurements. Nevertheless, people are content to base their foundations on the former.
  • It is well-known that real systems are dissipative, and it is known that these are modeled in the quantum domain by Lindblad equations (lots of quantum optics literature exists on this). Nevertheless, people are content to base their foundations on a conservative (lossless) dynamics.
  • It is well-known how dissipation results from the interaction with the environment. Nevertheless, people are content to ignore the environment in their foundations. (This changed a little with time. There is now often a lip service to decoherence, and also often claims that it settles things when taken together with the traditional assumptions. It doesn't, in my opinion.)
  • It is known (though less well-known) that models in which the electromagnetic field is treated classically and only the detector is quantized produce exactly the same Poisson statistics for photodetection as models employing a quantum field in a coherent state. This conclusively proves that the detector signals are artifacts produced by the detector and cannot be evidence of photons (since they are completely absent in the first model). Nevertheless, people are content to treat in their foundations detector signals as proof of photon arrival.
  • It is well-known that the most fundamental theory of Nature is quantum field theory, in which particles are mere field excitations and not the basic ontological entities. Nevertheless, people are content to treat in their foundations quantum mechanics in terms of particles.
I agree with all of that, but I'm not at all convinced that taking into account all of that complexity makes any difference. There is a reason that discussions of Bell's inequality and other foundational issues use simplified models, and that is that reasoning about the more realistic models is much more difficult. The assumption is that if we can understand what is going on in the more abstract model, then we can extend that understanding to more realistic models. It's sort of like how when Einstein was reasoning about SR, he used idealized clocks and light signals, and didn't try to take into account that clocks might be damaged by rapid acceleration, or that the timing of arrival of a light signal may be ambiguous, etc. To make the judgment that a simplified model captures the essence of a conceptual problem is certainly error-prone, and any conclusion someone comes to is always eligible to be re-opened if someone argues that more realistic details would invalidate the conclusion.

But in the case of QM, I really don't have a feeling that any of the difficulties with interpreting QM are resolved by the complexities you bring up. It seems to me, on the contrary, that the complexities can't possibly resolve them in the way you seem to be suggesting.

Whether it's QM or QFT, you have the same situation:
  • You have an experiment that involves a measurement with some set of possible outcomes: ##o_1, o_2, \ldots, o_N##
  • You use your theory to predict probabilities for each outcome: ##p_1, p_2, \ldots, p_N##
  • You perform the measurement and get some particular outcome: ##o_j##
  • Presumably, if you repeat the measurement often enough with the same initial conditions, the relative frequency of getting ##o_j## will approach ##p_j##. (If not, your theory is wrong, or you're making some error in your experimental setup, or in your calculations, or something.)
What you seem to be saying is that the outcome o_j is actually determined by the details you left out of your analysis. That seems completely implausible to me, in light of the EPR experiment (unless, as in Bohmian mechanics, the details have a nonlocal effect). In EPR, Alice and Bob are far apart. Alice performs a spin measurement along a particular axis, and the theory says that she will get spin-up with probability 1/2 and spin-down with probability 1/2. It's certainly plausible, considering Alice's result in isolation, that the details of her measuring device, or the electromagnetic field, or the atmosphere in the neighborhood of her measurement might affect the measurement process, so that the result is actually deterministic, and the 50/50 probability is some kind of averaging over ignored details. But that possibility becomes completely implausible when you take into account the perfect anti-correlation between her result and Bob's. How do the details of Bob's device happen to always produce the opposite effect of the details of Alice's device?

I understand that you can claim that in reality, the anti-correlation isn't perfect. Maybe it's only 90% anti-correlation, or whatever. But that doesn't really change the implausibility much. In those 90% of the cases where they get opposite results, it seems to me that either the details of Bob's and Alice's devices are irrelevant, or that mysteriously, the details are perfectly matched to produce opposite results. I just don't believe that that makes sense. Another argument that it can't be the details of their devices that make the difference is that it is possible to produce electrons that are guaranteed to be spin-up along a certain axis. Then we can test whether Alice always gets spin-up, or whether the details of her measuring device sometimes convert that into spin-down. That way, we can get an estimate as to the importance of those details. My guess is that they aren't important, but I need somebody who knows about experimental results to confirm or contradict that guess.

So if the ignored, microscopic details of Alice's and Bob's devices aren't important (and I just don't see how they plausibly can be), that leaves the ignored environment: the rest of the universe. Can details about the rest of the universe be what determines Alice's and Bob's outcomes? To me, that sounds like a hidden-variables theory of exactly the type that Bell tried to rule out. The hidden variable ##\lambda## in his analysis just represents any details that are common to Alice's and Bob's measurements. The common environment would certainly count. Of course, Bell's proof might have loopholes that haven't been completely closed. But it seems very implausible to me.

What I would like to see is some kind of simulation of the EPR experiment in which the supposed nondeterminism is actually resolved by the ignored details. That's what would convince me.
 
  • #285
stevendaryl said:
Another argument that it can't be the details of their devices that make the difference is that it is possible to produce electrons that are guaranteed to be spin-up along a certain axis. Then we can test whether Alice always gets spin-up, or whether the details of her measuring device sometimes convert that into spin-down. That way, we can get an estimate as to the importance of those details. My guess is that they aren't important, but I need somebody who knows about experimental results to confirm or contradict that guess.
If the input is all-spin-up and the measurement tests for spin-up, the result will be deterministic independent of the details of the detector. But if the input is all spin-up and the measurement tests in another direction, the random result will be produced by the detector. Both can be seen by considering a model that inputs a classical polarized field and uses a quantum detector sensitive to the polarization direction.
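For the first case the numbers follow from a one-line Born-rule calculation (standard spin-1/2 result, included only to fix the setting, not tied to any detector model): a spin prepared "up" along z and measured along an axis tilted by ##\theta## gives "up" with probability ##\cos^2(\theta/2)##; the result is deterministic only for ##\theta = 0## or ##\pi##.

```python
import numpy as np

# Spin-1/2 prepared "up" along z, measured along an axis n(theta) in the
# x-z plane. The Born rule gives P(up along n) = cos^2(theta/2):
# deterministic only for theta = 0 or pi, random otherwise.

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

psi = np.array([1, 0], dtype=complex)          # |up_z>
theta = np.pi / 3
n_dot_sigma = np.cos(theta) * sz + np.sin(theta) * sx
P_plus = 0.5 * (np.eye(2) + n_dot_sigma)       # projector onto +1 eigenstate of n.sigma

p_up = np.real(psi.conj() @ P_plus @ psi)
print("P(up along tilted axis):", p_up, " cos^2(theta/2):", np.cos(theta / 2) ** 2)

rng = np.random.default_rng(1)
outcomes = rng.random(100_000) < p_up          # single runs: definite but random
print("simulated frequency of 'up':", outcomes.mean())
```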
stevendaryl said:
So if the ignored, microscopic details of Alice's and Bob's devices aren't important (and I just don't see how they plausibly can be), that leaves the ignored environment: the rest of the universe.
The ignored environment includes the microscopic details of Alice's and Bob's devices and how they were influenced by the common past. As I haven't done the calculations (remember the report I linked to needed 150 pages to make the case in the particular models studied there) I cannot tell what would be the mathematical result but I suspect it would just give what is actually observed.

But you are mixing two topics that should be kept separate - the question of whether perfect anticorrelations can be explained classically, and the question of whether quantum randomness can be explained by restricting the deterministic quantum dynamics of the universe. Deterministic is far from equivalent with classical and/or Bell-local! Therefore these are very different questions.

The quantum mechanical correlations observed in a tiny quantum system come from the quantum mechanical dynamics of the density matrix of the universe - there is nothing classical in the latter, hence one shouldn't expect that the restriction to a tiny subsystem would be classical. On the contrary, all we know about the many actually studied subsystems of slightly larger quantum systems indicates that one gets exactly the usual quantum descriptions of the isolated subsystem, plus correction terms that account for additional randomness - decoherence effects, etc. There is no ground at all to think that this should become different when the systems get larger, and ultimately universe-sized.
 
  • #286
A. Neumaier said:
The ignored environment includes the microscopic details of Alice's and Bob's devices and how they were influenced by the common past.

But it seems to me that the perfect anti-correlations imply that the details of Alice's and Bob's devices AREN'T important. Alice can independently fool with the details of her device, and that won't upset the perfect anti-correlations with Bob's measurement.

A. Neumaier said:
But you are mixing two topics that should be kept separate - the question of whether perfect anticorrelations can be explained classically, and the question of whether quantum randomness can be explained by restricting the deterministic quantum dynamics of the universe. Deterministic is far from equivalent with classical and/or Bell-local! Therefore these are very different questions.

Yes, I agree that they are different questions, but as I said, I find the idea that quantum nondeterminism can be explained through ignored details about the rest of the universe to be sufficiently like the classical case that I am very dubious that it can be made to work. There are more exotic variants of this idea, such as the Bohmian approach (the extra details to resolve the nondeterminism are nonlocal) or the retrocausal approach (the extra details are found in the future, not in the present). But I find it very implausible that extra details about the causal past can possibly explain the nondeterminism. As I said, it would take a simulation (or a calculation, if I could follow it) to convince me of such a resolution. I am not a professional physicist, so I don't have the qualifications or knowledge to state this with certainty, but it seems to me that your suggestion might be provably impossible.
 
  • #287
stevendaryl said:
I am not a professional physicist, so I don't have the qualifications or knowledge to state this with certainty, but it seems to me that your suggestion might be provably impossible.

To me that seems most likely, unless of course we're drifting into a superdeterministic interpretation, which is the feeling I'm getting.

Also, I'm still not even clear what exactly is being argued: the idealized model was rejected without any attempt at reframing it in this new view, so we didn't get a good look at it.

Not to be a bore, but I think that to get at anything conclusive, the best bet is to go for last year's loophole-free experimental test. It is a realistic example and the setup is, after all, relatively simple. The problem is, honestly, that we don't really believe in this idea; the burden of proof doesn't lie with the accepted framework (however unfair that may seem).
 
  • #288
ddd123 said:
To me that seems most likely, unless of course we're drifting into a superdeterministic interpretation, which is the feeling I'm getting.

Yeah, well, superdeterminism is very irksome for philosophical and scientific reasons, but sometimes I wonder if it really is the answer. We think of the choices we make (about whether to measure this or that) as freely chosen, but since we are physical systems, obeying the same laws of physics as electrons, at some level, we no more choose what we do than an electron does.
 
  • #289
stevendaryl said:
Yeah, well, superdeterminism is very irksome for philosophical and scientific reasons, but sometimes I wonder if it really is the answer. We think of the choices we make (about whether to measure this or that) as freely chosen, but since we are physical systems, obeying the same laws of physics as electrons, at some level, we no more choose what we do than an electron does.

I don't think that's the problem. It's that the superdeterministic law would have to be concocted specifically to counter our fiddling with the instruments. It's more anthropocentric, not less, imho.
 
  • #290
ddd123 said:
I don't think that's the problem. It's that the superdeterministic law would have to be concocted specifically to counter our fiddling with the instruments. It's more anthropocentric, not less, imho.

I think that depends on the details of the superdeterministic theory. Just saying that there is a conspiracy is pretty worthless, but if someone could give a plausible answer to how the conspiracy is implemented, it might not be objectionable.
 
  • #291
stevendaryl said:
I think that depends on the details of the superdeterministic theory. Just saying that there is a conspiracy is pretty worthless, but if someone could give a plausible answer to how the conspiracy is implemented, it might not be objectionable.

Yes, I really can't imagine how that could be though. If such a theory ends up being non-magical-looking, wouldn't it be just a local realistic one, and thus nonexistent?
 
  • #292
ddd123 said:
Yes, I really can't imagine how that could be though. If such a theory ends up being non-magical-looking, wouldn't it be just a local realistic one, and thus nonexistent?

No. What Bell ruled out was the possibility of explaining the outcome of the EPR experiment by a function of the form:

$$P(A \,\&\, B \mid \alpha, \beta) \;=\; \sum_\lambda P(\lambda)\, P(A \mid \lambda, \alpha)\, P(B \mid \lambda, \beta)$$

A superdeterministic theory would modify this to
$$P(A \,\&\, B \mid \alpha, \beta) \;=\; \frac{\sum_\lambda P(\lambda)\, P(A \,\&\, \alpha \mid \lambda)\, P(B \,\&\, \beta \mid \lambda)}{P(\alpha \,\&\, \beta)}$$

Alice and Bob's settings ##\alpha## and ##\beta## would not be assumed independent of ##\lambda##. That's a different assumption, and the fact that the former is impossible doesn't imply that the latter is impossible.
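To make the contrast concrete, here is a minimal numerical sketch (the local model below is an arbitrary textbook example, not anyone's proposed theory): any model of the first, factorized form stays within the CHSH bound ##|S| \le 2##, while the quantum singlet prediction ##E(a,b) = -\cos(a-b)## reaches ##2\sqrt{2}##.

```python
import numpy as np

# CHSH comparison: an arbitrarily chosen Bell-local hidden-variable model of
# the factorized form above vs. the quantum singlet prediction.
# Local model: lambda uniform on [0, 2*pi); A(lambda,a) = sign(cos(lambda-a)),
# B(lambda,b) = -sign(cos(lambda-b)).

rng = np.random.default_rng(42)
lam = rng.uniform(0, 2 * np.pi, size=1_000_000)

def E_local(a, b):
    A = np.sign(np.cos(lam - a))
    B = -np.sign(np.cos(lam - b))
    return np.mean(A * B)

def E_quantum(a, b):
    return -np.cos(a - b)            # singlet correlation

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

def chsh(E):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("CHSH, local model :", chsh(E_local))    # about -2, i.e. |S| <= 2
print("CHSH, quantum     :", chsh(E_quantum))  # -2*sqrt(2), about -2.83
```

The particular ##A## and ##B## above happen to saturate the classical bound at these angles; the CHSH theorem guarantees that no choice of factorized model can exceed it.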
 
  • #293
stevendaryl said:
I find the idea that quantum nondeterminism can be explained through ignored details about the rest of the universe to be sufficiently like the classical case that I am very dubious that it can be made to work.
I don't think this can be made to work.

But you misunderstood me. I am only claiming the first part, ''that quantum nondeterminism can be explained through ignored details about the rest of the universe'', but not that it makes the explanation sufficiently classical. It makes the explanation only deterministic, which for me is something completely different. Nevertheless it is a step forward. Unlike Bohmian mechanics, it requires not the slightest alteration to the quantum formalism.
 
  • #294
ddd123 said:
the idealized model was rejected without any attempt at reframing it
So far I haven't discussed it in detail only because I haven't yet received the requested reassurance that there won't be any further shifting of ground, like ''but I had intended ...'', or ''but there is another experiment where ...'', or ''but if you modify the setting such that ...'', where ... are changes in the precise description for which my analysis (of adequacy to the real world, and of similarity to classical situations) would no longer be appropriate.

Once it is clear which absolutely fixed setting is under discussion, with all relevant details, assumptions, and arguments for its weirdness fully spelled out, I'll discuss the model.
 
  • #295
A. Neumaier said:
I don't think this can be made to work.

But you misunderstood me. I am only claiming the first part, ''that quantum nondeterminism can be explained through ignored details about the rest of the universe'', but not that it makes the explanation sufficiently classical.

Well, regardless of whether it's classical or not, I don't believe that it is possible without "exotic" notions of ignored details (such as those that work backward in time or FTL).

A. Neumaier said:
It makes the explanation only deterministic, which for me is something completely different. Nevertheless it is a step forward. Unlike Bohmian mechanics, it requires not the slightest alteration to the quantum formalism.

Well, if it works. That's what I find doubtful. Quantum mechanics through the Born rule gives probabilities for outcomes. For pure QM to give deterministic results means that the evolution of the wave function, when you take into account all the details of the environment, makes every probability go to either 0 or 1. That does not seem consistent with the linearity of quantum mechanics. If you have a wave function for the whole universe that represents Alice definitely getting spin-up, and you have a different wave function that represents Alice definitely getting spin-down, then the superposition of the two gives a wave function that represents Alice in an indeterminate state. So to me, either you go to Many Worlds, where both possibilities occur, or you go to something beyond pure QM, such as Bohm or collapse.
 
  • #296
stevendaryl said:
when you take into account all the details of the environment, makes every probability go to either 0 or 1. That does not seem consistent with the linearity of quantum mechanics.
Quantum mechanics is linear (von Neumann equation ##i \hbar \dot\rho=[H,\rho]##) only in the variables ##\rho## that we do not have experimental access to when the system has more than a few degrees of freedom (i.e., when a measuring device is involved). But it is highly nonlinear and chaotic in the variables that are measurable.

This can be seen already classically.
The analogue of the von Neumann equation for a classical multiparticle system is the Liouville equation ##\dot\rho=\{\rho,H\}##, and it is also linear. But it describes faithfully the full nonlinear dynamics of the classical multiparticle system! The nonlinearities appear once one interprets the system in terms of the observable variables, where one gets, through the nonlinear BBGKY hierarchy, the nonlinear Boltzmann equation of kinetic theory and the nonlinear Navier-Stokes equations of hydrodynamics.
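Written out for a single anharmonic degree of freedom (a generic textbook case, only to make the point explicit): with ##H = p^2/2m + V(q)## the Liouville equation reads
$$\frac{\partial\rho(q,p,t)}{\partial t} \;=\; -\,\frac{p}{m}\,\frac{\partial\rho}{\partial q} \;+\; V'(q)\,\frac{\partial\rho}{\partial p},$$
which is linear in ##\rho##, while the corresponding trajectory equations ##\dot q = p/m##, ##\dot p = -V'(q)## are nonlinear in ##(q,p)## whenever ##V## is anharmonic.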

Similarly, one can derive the nonlinear Navier-Stokes equations of hydrodynamics also from quantum mechanics.

Note also that many of the technical devices of everyday life that produce discrete results and change in a discrete fashion are also governed by nonlinear differential equations. It is well-known how to get bistability in a classical dissipative system from a continuous nonlinear dynamics involving a double well potential! There is nothing mysterious at all in always getting one of two possible definite discrete answers in a more or less random fashion from a nonlinear classical dynamics, which becomes a linear dynamics once formulated (fully equivalently) as a dynamics of phase space functions, which is the classical analogue (and classical limit) of the linear Ehrenfest equation for quantum systems.
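A minimal sketch of that last point (overdamped Langevin dynamics in a double-well potential, with all parameters chosen arbitrarily): a continuous, dissipative, classical equation of motion still ends every run in one of exactly two discrete outcomes, selected seemingly at random by the unresolved noise.

```python
import numpy as np

# Overdamped Langevin dynamics in a double-well potential V(x) = (x^2 - 1)^2:
#   dx = -V'(x) dt + sqrt(2*D) dW
# Each run starts near the unstable maximum x = 0 and settles into one of the
# two wells at x = -1 or x = +1: a continuous classical dynamics with two
# discrete, apparently random outcomes.

rng = np.random.default_rng(7)

def run(x0=0.0, D=0.05, dt=1e-3, n_steps=20_000):
    x = x0
    noise = rng.normal(size=n_steps) * np.sqrt(2 * D * dt)
    for xi in noise:
        x += -4.0 * x * (x**2 - 1.0) * dt + xi    # Euler-Maruyama step, drift = -V'(x)
    return x

finals = np.array([run() for _ in range(200)])
print("fraction ending in right well:", np.mean(finals > 0))   # ~0.5
print("typical final positions:", np.round(finals[:10], 2))    # near +1 or -1
```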
 
  • #297
stevendaryl said:
without "exotic" notions of ignored details
The notion of ignored details I am referring to is nothing exotic at all but technically precisely the same as that routinely applied in the projection operator technique for deriving the equations for a reduced description. It is a very standard technique from statistical mechanics that can be applied (with a slightly different setting in each case) to a variety of situations, and in particular to the one of interest here (contraction of a quantum Liouville equation to a Lindblad equation for a small subsystem). The necessary background can be found in a book by Grabert. (Sorry, again more than just a few pages.)
 
  • #298
A. Neumaier said:
Quantum mechanics is linear (von Neumann equation ##i \hbar \dot\rho=[H,\rho]##) only in the variables ##\rho## that we do not have experimental access to when the system has more than a few degrees of freedom (i.e., when a measuring device is involved). But it is highly nonlinear and chaotic in the variables that are measurable.

As I said, what I would like to see is a demonstration (simulation, or derivation) that the evolution equations of QM (or QFT) lead to (in typical circumstances) selection of a single outcome out of a set of possible outcomes to a measurement. Is there really any reason to believe that happens? I would think that there is not; as a matter of fact, I would think that somebody smarter than me could prove that it doesn't happen. I'm certainly happy to be wrong about this.
 
  • #299
stevendaryl said:
I would like to see is a demonstration (simulation, or derivation) that the evolution equations of QM (or QFT) lead to (in typical circumstances) selection of a single outcome out of a set of possible outcomes
It is nothing particularly demanding, just a lot of technical work to get it right - like every detailed derivation in statistical mechanics. If I find the time I'll give a proper derivation - but surely not in the next few days, as it is the amount of work needed for writing a research paper.

Therefore I had pointed to an analogous result for a classical bistable potential. A 2-state quantum system (electron with two basis states ''bound'' and ''free'', the minimal quantum measurement device) behaves qualitatively very similarly.
 
  • #300
A. Neumaier said:
Therefore I had pointed to an analogous result for a classical bistable potential. A 2-state quantum system (electron with two basis states ''bound'' and ''free'', the minimal quantum measurement device) behaves qualitatively very similarly.

I understand how bistable potentials can be similar in some respects, but I don't think that works for distant correlations such as EPR. That's the demonstration that I would like to see: show how tiny details cause Alice and Bob to get definite, opposite values in the case where they are measuring spins along the same direction.
 
