Quantum mechanics is not weird, unless presented as such

  • #251
zonde said:
I'm not stevendaryl and I'm not sure if it matters, but I say yes to your question, with one correction to stevendaryl's explanation: the statistics in point 5 are given for the 3-settings case (as in the picture).

@Neumaier: I would answer yes too, if you assume Alice's and Bob's experimental regions are spacelike separated. stevendaryl is implicitly assuming that, I think, but I wouldn't want to disappoint you after your attempt if you had assumed timelike separation instead.
 
  • #252
A. Neumaier said:
Why is material existence absent when there is a mass density? Classically, in classical elasticity theory (which governs the behavior of all solids of our ordinary experience) and hydrodynamics (which governs the behavior of all liquids and gases of our ordinary experience), all you have about material existence is the mass density - unless you go into the microscopic domain where classical descriptions are not applicable.

Thanks for expanding on this. Let me chew on this for a bit. My classical intuition tends to equate material with solid, and solid with particle existence. I think I've got to change that way of thinking about things.

Let me ask another question for now though. Earlier in your thread, you differentiated between quantum information theory and quantum field theory, but I can't find the post at the moment. In your view, is there a fundamental difference between these two schools of thought that is easily explained? (Hopefully something more enlightening than that one refers to information and the other refers to fields. )
 
  • #253
Feeble Wonk said:
is there a fundamental difference between these two schools of thought that is easily explained?
In quantum information theory - in sharp contrast to quantum field theory - all Hilbert spaces are finite dimensional, all spectra are discrete, there is no scattering, and canonical commutation rules are absent. No functional analysis is needed to understand it.
 
Last edited:
  • #254
A. Neumaier said:
Those who want to see that quantum mechanics is not at all weird (when presented in the right way) but very close to classical mechanics should read instead my online book Classical and Quantum Mechanics via Lie algebras. (At least I tried to ensure that nothing weird entered the book.)

I am as layman as a layman can get, but I got a hunch the other day that classical and quantum mechanics are in some basic way(s) similar. However, I'll leave it at that hunch. I hope though you have a point there! :wink: :woot:
 
  • #255
A. Neumaier said:
My book has a far more modest goal - to show how close quantum mechanics can be to classical mechanics
I guess you mean classical, nonrelativistic mechanics?

(both formally and in its interpretation) without losing the slightest substance of the quantum description, but removing much (not all) of its weirdness.
The perceived weirdness in the nonrelativistic case is mostly confined to the features of superposition and indeterminacy.

But the more challenging aspects of weirdness are in the relativistic context, where explanations in terms of local hidden (classical) variables are pretty much ruled out.

Still, emphasizing the commonalities in disparate branches of physics by explaining them in terms of functionals over algebras is worthwhile. Even though I have physics and maths degrees, I did not think of things this way until you pointed it out (many years ago now).
 
  • #256
A. Neumaier said:
For a 2-level system, the density matrix is a matrix expressible in terms of four definite real numbers, which is not so different from a classical phase space position that takes 6 real numbers for its description. There are also classical observables with matrix shape, such as the inertia tensor of a rigid body. The density matrix is analogous.

For more complex quantum systems, the number of definite real numbers needed to fix the state is bigger (or even infinite), but this is also the case in classical complex objects or fields. Thus the ontology of physical reality is as real as one can have it in a formal model of reality.

Response probabilities can be determined from the density matrix, but one can also determine response probabilities from classical chaotic systems. This therefore has nothing to do with the underlying ontology.

This has me pondering the ontology of the density matrix vs that of the state vector.
http://arxiv.org/pdf/1412.6213v2.pdf
I'm confident that you are familiar with this paper, or the general argument at least. I'm curious what your impression is on this issue, and how you see the ontological relationship of the state vector, reduced state vector, density matrix, etc.
 
  • #257
Feeble Wonk said:
the ontological relationship of the state vector, reduced state vector, density matrix, etc.
In my view, state vectors are abstract mathematical tools, relevant in practice only for systems with few discrete degrees of freedom (such as spins, energy levels, or polarizations) that can be prepared in a pure state, and where all other degrees of freedom are projected out. Thus they have no ontological status in the physical world but are useful as abbreviated descriptions of these particular systems.

The typical state of a system realized in Nature is given by a density matrix. A density matrix is well-behaved under restriction to a subsystem, and hence can be used to describe systems of any size. In particular, it is consistent to consider each density matrix of a system in our universe as a restriction of the density matrix of the universe.

I postulate that the latter (described by a quantum field theory that we don't know yet in detail) is objectively existent in the sense of realism, and objectively determines the density of everything in the universe, and hence in any part of it. As a consequence, the density matrix of any subsystem that can be objectively delineated from the rest of the universe is also objective (though its dynamics is partially uncertain and hence stochastic, since the coupling to the environment - the remaining universe - is ignored).

On the other hand, our human approximations to these density matrices are subjective since they depend on how much we know (or postulate) about the system. They are only as good as the extent to which they approximate the true, objective density matrix of the system.

For example, a cup of water left alone is after a while in a state approximately described by a density matrix of the form discussed in statistical thermodynamics. This has the advantage that the density matrix can be described by a few parameters only. This suffices to determine its macroscopic properties, and hence is used in practice although the true density matrix is slightly different and would account for tiny, practically irrelevant deviations from thermodynamics.

The more detailed a state description is, the more parameters are needed to describe it, since a quantum field has infinitely many degrees of freedom in any extended region of space. For more, read Chapter 10 of my book linked to in post #2.
 
Last edited:
  • Like
Likes Mentz114
  • #258
I just want to chime in to say thanks to the contributors (especially A. Neumaier) here. It's a very interesting read. I'm not qualified to contribute to the debate but can understand it.

It's good to see a discussion about a concept again. Thanks.
 
  • Like
Likes bhobba
  • #259
stevendaryl said:
You have a source of some unknown kind of signal that periodically sends a pair of signals
This cannot be done for quantum signals. The standard experimental settings (of which the present one seems to be an abstraction) produce signals at random times.

stevendaryl said:
Each time the source sends its signals, exactly one of Alice's LEDs light up, and exactly one of Bob's LED's light up.

How can one perform such an experiment? You need to take into account losses due to unavoidable imperfections. Already a 40% photo detection efficiency is considered high! If one acknowledges that in the description of the experiment, things don't look quite that spectacular.

I am still waiting for your reply to this post.
 
Last edited:
  • #260
It's idealized, but that loophole free Bell test I mentioned earlier is real and the strangeness is intact.
 
  • #261
ddd123 said:
that loophole free Bell test I mentioned earlier is real and the strangeness is intact.
But (like everything in the context of Bell's theorem) it's phrased in terms of particles. I liked stevendaryl's attempt to remove every reference to particles. Unfortunately his particular choices dramatically magnify the weirdness by using highly unrealistic assumptions.
 
  • #262
A. Neumaier said:
This cannot be done for quantum signals. The standard experimental settings (of which the present one seems to be an abstraction) produce signals at random times.

How can one perform such an experiment? You need to take into account losses due to unavoidable imperfections. Already a 40% photo detection efficiency is considered high! If one acknowledges that in the description of the experiment, things don't look quite that spectacular.
stevendaryl's example can be used to analyze the QM model. It is not quite as useful for analyzing real experiments like these most recent ones:
http://arxiv.org/abs/1508.05949
http://arxiv.org/abs/1511.03189
http://arxiv.org/abs/1511.03190
You are right that with 40% detection efficiency local models are not ruled out. But in the two-photon experiments mentioned above they have achieved system efficiencies (across the whole setup) of around 75%, and they use superconductor-based detectors with efficiencies higher than 90%.
To avoid random signal times they use pulsed lasers (they have to account for cases of two photon pairs in a single pulse).
And the other experiment uses electrons that are entangled via entanglement swapping. So the detection processes are macroscopically distinct and determined, with 100% detection efficiency. But it analyzes only a subensemble. However, this does not open any (known) loopholes, as the decision about inclusion in the subensemble is made at a third location that is spacelike separated from both detection processes (which are performed in any case).
 
  • #263
zonde said:
To avoid random signal times they use pulsed lasers (they have to account for cases of two photon pairs in a single pulse).
And the other experiment uses electrons that are entangled via entanglement swapping. So the detection processes are macroscopically distinct and determined, with 100% detection efficiency. But it analyzes only a subensemble.
In both cases there is still significant residual randomness in the timing: In the first case due to 10-25% missed photons, and in the second case since the selection of the subensemble introduces randomness.
 
  • #264
A. Neumaier said:
But (like everything in the context of Bell's theorem) it's phrased in terms of particles. I liked stevendaryl's attempt to remove every reference to particles. Unfortunately his particular choices dramatically magnify the weirdness by using highly unrealistic assumptions.

It's a real experiment, not a theory, they did it, you can rephrase it if you want. I contend that it wouldn't change much, but I'm open to possibilities.
 
  • #265
ddd123 said:
It's a real experiment, not a theory, they did it, you can rephrase it if you want.
I won't rephrase it myself. If you want me to discuss it, describe it in a similar way as stevendaryl did, without mentioning particles but including all details that in your opinion are necessary and make the outcome look weird. And, for the sake of easy reference, please add to the post describing your setting a reference to the paper you took as blueprint. Then I'll give an analysis from my point of view.
 
  • #266
That's too much work for me, but if I were you (that is, convinced of the possibility of eliminating the weirdness by rephrasing), since this is the crux of the whole matter and arguably the only irreducible weirdness in QM, I would try it. That, or other similar loophole-free tests you prefer. Otherwise it's just a dogma and I wouldn't feel at ease with it. But to each his own I guess.
 
  • #267
A. Neumaier said:
This cannot be done for quantum signals. The standard experimental settings (of which the present one seems to be an abstraction) produce signals at random times.

Fair enough. But is this an important point in understanding EPR-type experiments, or is it just a complication that makes it messier to reason about?

How can one perform such an experiment? You need to take into account losses due to unavoidable imperfections. Already a 40% photo detection efficiency is considered high! If one acknowledges that in the description of the experiment, things don't look quite that spectacular.

Same question. I have heard of attempts to get around Bell's inequality by taking advantage of detector inefficiencies and noise, but I thought that such loopholes were not considered very promising in light of recent experiments.

I am still waiting for your reply to this post.

I'm not sure I can give a definitive answer ahead of time. The way that such arguments go is:

"Look, here's a classical situation that bears some similarity with EPR."

"Yes, but that situation differs from EPR in these important ways, so I don't see why that analogy is helpful..."

I suppose that such a back-and-forth dialog could at least refine the exact sense in which EPR is weird, compared to analogous classical situations.
 
  • #268
A. Neumaier said:
In both cases there is still significant residual randomness in the timing: In the first case due to 10-25% missed photons
You would have to examine the derivations of the CH and Eberhard inequalities if you want to be sure that 75% efficiency is enough. They use the particle concept of course, but at least the Eberhard inequality can be rewritten without particles if you allow some form of counterfactual reasoning (it will apply to any model of reality but not exactly to reality itself).
A. Neumaier said:
in the second case since the selection of the subensemble introduces randomness.
Does this introduce some loophole? As far as I have analyzed it, this does not change anything.

I would like to emphasize that the question whether reality is local is much harder. It is much easier to ask whether QM is local, as we can use idealized predictions and counterfactual reasoning. And I suppose that stevendaryl was trying to address the weirdness of QM and not exactly the weirdness of reality.
 
  • #269
But, as I understood him, Neumaier doesn't really want to recover local realism, but simply to find classical analogues of phenomena in some way that makes the absence of local realism look reasonable enough.
 
  • #270
stevendaryl said:
I'm not sure I can give a definitive answer ahead of time.
I just want to make sure that your model won't change during the discussion. For, years ago, I wasted a lot of time in similar discussions where, when I made a point about some scenario, the reply was ''but this doesn't explain ...'', where ''...'' was a different setting. One can never satisfy such participants in a discussion.

It is a different matter when you find whatever explanation I can give insufficiently convincing for explaining your particular setting. In this case, we may differ in what is sufficiently convincing, but at least we are not shifting grounds, and the argument will have bounded length.

stevendaryl said:
Yes, but that situation differs from EPR in these important ways
If you replace ''differs from EPR'' by ''differs from the setting in post #234'', this kind of argument is constructive. If we have to argue about what the real intention of EPR was, it becomes endless.

stevendaryl said:
Same question.
My comment was intended to convey that your setting becomes more convincing (and trying to explain it more attractive to me) if you drop 'periodically' or replace it by 'random', and if you don't insist on perfect correlations but on high correlations. My analysis will surely not depend on the particular value of the thresholds. I'd appreciate it if you'd edit your post #234 accordingly, so that it still displays what you find weird but is closer to reality.
 
  • #271
zonde said:
the question whether reality is local is much harder
I don't think reality is local in Bell's sense. It is local in the sense of QFT, but these are completely different concepts.

But I also don't think that nonlocality alone makes QM weird but only nonlocality together with poor classical language for quantum phenomena.
 
  • Like
Likes vanhees71
  • #272
A. Neumaier said:
I don't think reality is local in Bell's sense. It is local in the sense of QFT, but these are completely different concepts.
I am trying not to get lost in all the different locality concepts. So I will hold on to this concept: a measurement result at one location is not influenced by (measurement) parameters at another, spacelike separated location.
But what is local in the QFT sense?

A. Neumaier said:
But I also don't think that nonlocality alone makes QM weird but only nonlocality together with poor classical language for quantum phenomena.
If you take nonlocality as some FTL phenomenon then it's not so weird. On the other hand, if you take nonlocality as a totally novel philosophical concept like "distance is illusion" then it's totally weird and incompatible with (philosophical) realism.
Speaking about classical language, I think the problem is the lack of common agreement about which classical concepts can be revised and which ones are rather fundamental to science.
Say, particles are just a model, so the concept can be revised. But you have to demonstrate that you can recover the predictions of particle-based models or recover particles in some limit.
 
  • #273
zonde said:
But you have to demonstrate that you can recover the predictions of particle-based models or recover particles in some limit.
This had already been demonstrated long before the advent of quantum mechanics. There is a well-known way to recover particles from fields, called geometric optics. The particle concept is appropriate (and conforms with the intuition about classical particles) precisely when the conditions for approximating wave equations by geometric optics are applicable.
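
To make the geometric optics limit a bit more concrete, here is the standard textbook sketch (generic notation, not taken from the book): write the wave as a slowly varying amplitude times a rapidly varying phase and keep only the leading order in the short-wavelength limit,
$$\psi(x) = A(x)\,e^{i k_0 S(x)}, \qquad \nabla^2\psi + k_0^2\, n(x)^2\, \psi = 0 \ \ \Rightarrow\ \ (\nabla S)^2 = n(x)^2 + O(1/k_0).$$
The eikonal equation ##(\nabla S)^2 = n^2## is a Hamilton-Jacobi equation whose characteristics are rays obeying particle-like equations of motion. The same substitution ##\psi = A\,e^{iS/\hbar}## in the Schrödinger equation gives, to leading order in ##\hbar##, the classical Hamilton-Jacobi equation ##\partial_t S + (\nabla S)^2/2m + V = 0##.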

zonde said:
what is local in the QFT sense?
It means: ''observable field quantities ##\phi(x_1),\ldots,\phi(x_n)## commute if their arguments are mutually spacelike.'' This is the precise formal definition.
As a consequence (and, conversely, as an informal motivation for this condition), these quantities can (at least in principle) be independently prepared.
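
As a concrete textbook illustration (the free scalar field, with a metric convention in which spacelike separations satisfy ##(x-y)^2<0##): the commutator of two field operators is a c-number,
$$[\phi(x),\phi(y)] = i\,\Delta(x-y),$$
where ##\Delta## is the Lorentz-invariant Pauli-Jordan function, which vanishes whenever ##x-y## is spacelike. So field observables at spacelike separation indeed commute, which is the content of the locality condition stated above.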

It is not a statement about measurement, which even in the simplest case is a complicated statistical many-particle problem, since a permanent record must be formed through the inherent quantum dynamics of system+measuring device+environment.

That the traditional quantum foundations take a human activity, the measurement process, as fundamental for the foundations is peculiar to quantum mechanics and part of the reason why the interpretation in these terms leads to many weird situations.
 
Last edited:
  • Like
Likes vanhees71
  • #274
A. Neumaier said:
these quantities can (at least in principle) be independently prepared.
Note that unlike measurement, which didn't exist before mankind learned to count, preparation is not primarily a human activity but something far more objective.

Nature itself prepares all the states that can actually be found in Nature - water in a state of local equilibrium, alpha, beta and gamma-rays, chiral molecules in a left-handed or right-handed state rather than their superposition, etc. - without any special machinery and without any human being having to do anything or to be around. While we can prepare something only if we know Nature well enough to control these preparation processes. That's the art of designing experiments.
 
Last edited:
  • #275
If I have understood your position accurately, you've suggested that all observables have a definite state at all times regardless of whether they are in principle measurable/observable. So, I assume that in your conception of the yet to be fully developed quantum field theory, the unitary evolution of the cosmological quantum field is entirely deterministic. Yes?
 
  • #276
Feeble Wonk said:
the unitary evolution of the cosmological quantum field is entirely deterministic. Yes?
Yes. It is only surprising and looks probabilistic to us because we only know a very small part of its state. (This is one of the reasons I also believe in strong AI. But if you want to discuss this, please don't do it here but open a new thread in the appropriate place!)
 
  • #277
A. Neumaier said:
Yes. It is only surprising and looks probabilistic to us because we only know a very small part of its state. (This is one of the reasons I also believe in strong AI. But if you want to discuss this, please don't do it here but open a new thread in the appropriate place!)

Sorry. I'm confused by the "looks probabilistic" reference.
 
  • #278
Feeble Wonk said:
Sorry. I'm confused by the "looks probabilistic" reference.
A. Neumaier said:
It is only surprising and looks probabilistic to us because we only know a very small part of its state.
Well, if one takes a deterministic dynamical system and looks at part of it without knowing the (classical, deterministic) state of the remainder (except very roughly), one can no longer make deterministic predictions. But if the part one knows is sufficiently well chosen and one doesn't demand too high accuracy of the predictions (or predictions for too long times), then one can still give a probabilistic reduced dynamics for the known part of the system. Physicists learned with time which systems have this property!

Weather forecasting is a case in point. The dynamics is considered completely classical, but because we have incomplete information we can only make stochastic models for the part we can get data for.

The physical process by which one gets the reduced system description is, on the most accurate level, always the same. It is called the projection operator formalism. There are also technically simpler but less accurate techniques.
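
A toy illustration of this point (not the projection operator formalism itself, just a fully deterministic system whose observed part looks stochastic; all names and parameters below are made up for the example):

```python
import numpy as np

# A deterministic "universe": one observed variable weakly coupled to many
# unobserved chaotic environment variables (logistic maps).  To an observer
# who sees only the single variable, its evolution is effectively
# unpredictable and calls for a stochastic reduced description.

rng = np.random.default_rng(0)        # used only to choose fixed initial conditions
n_env = 200                           # number of hidden environment variables
env = rng.uniform(0.1, 0.9, n_env)    # hidden initial conditions, never shown to the observer
x = 0.3                               # the single observed variable
eps = 0.05                            # weak system-environment coupling

observed = []
for t in range(2000):
    env = 3.9 * env * (1.0 - env)     # chaotic but fully deterministic environment
    kick = eps * (env.mean() - 0.5)   # the environment's net effect on the system
    x = (0.9 * x + kick) % 1.0        # deterministic update of the observed variable
    observed.append(x)

print("first few observed values:", np.round(observed[:5], 3))
```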
 
  • #279
A. Neumaier said:
Well, if one takes a deterministic dynamical system and looks at part of it without knowing the (classical, deterministic) state of the remainder (except very roughly), one can no longer make deterministic predictions. But if the part one knows is sufficiently well chosen and one doesn't demand too high accuracy of the predictions (or predictions for too long times), then one can still give a probabilistic reduced dynamics for the known part of the system. Physicists learned with time which systems have this property!

Weather forecasting is a case in point. The dynamics is considered completely classical, but because we have incomplete information we can only make stochastic models for the part we can get data for.

The physical process by which one gets the reduced system description is, on the most accurate level, always the same. It is called the projection operator formalism. There are also technically simpler but less accurate techniques.

Yes, and I think that the relationship between determinism and apparent randomness gets at the heart of what is different about quantum mechanics.

Classical systems are nondeterministic for two reasons:
  1. We only know the initial conditions to a certain degree of accuracy. There are many possible states that are consistent with our finite knowledge, and those different states, when evolved forward in time, eventually become macroscopically distinguishable. So future macroscopic conditions are not uniquely determined by present macroscopic conditions.
  2. We only know the conditions in one limited region. Eventually, conditions in other regions will have an effect on this region, and that effect is not predictable.
If we assume (as Einstein did) that causal influences propagate at lightspeed or slower, then we can eliminate the second source of nondeterminism; we don't need to know what conditions are like everywhere, just in the backward lightcone of where we are trying to make a prediction.

So the real weirdness of quantum mechanics is that we have a nondeterminism that doesn't seem to be due to lack of information about the details of the present state.

Or we can put it a different way: Quantum mechanics has a notion of "state" for a system, namely the density matrix, which evolves deterministically with time. But that notion of state does not describe what we actually observe, which is definite outcomes for measurements. The density matrix may describe a system as a 40/60 mixture of two different eigenstates, while our observations show a definite value for whatever observable we measure. So what is the relationship between what we see (definite, nondeterministic results) and what QM describes (deterministic evolution without sharp values for observables)? You could take the approach in classical statistical mechanics; the state (the partition function, or whatever) does not describe a single system, but describes an ensemble of similarly-prepared systems.

But in the case of classical statistical mechanics, it's believed that there are microscopic differences between members of the ensemble, and that these microscopic differences are only captured statistically by the thermodynamic state. It's believed that each member of the ensemble is actually governed by Newtonian physics. So in classical statistical mechanics, there are two different levels of description: A specific element of an ensemble can be described using Newton's laws of motion, while we can take a statistical average over many such elements to get a thermodynamic description, which is more manageable than Newton when the number of components becomes huge.

So if the relationship between the QM state and the actual observed world is the same as for classical statistical mechanics, that QM provides an ensemble view, then that would seem to suggest that there is a missing dynamics for the individual element of the ensemble. In light of experiments such as EPR, it would appear that this missing dynamics for the single system would have to be nonlocal.
 
  • Like
Likes Mentz114
  • #280
stevendaryl said:
this missing dynamics for the single system would have to be nonlocal.
Yes. The missing dynamics is that of the environment.

In all descriptions of Bell-like experiments, the very complex environment (obviously nonlocal, since it is the remainder of the universe) is reduced to one single act - the collapse of the state. Thus even if the universe evolves deterministically, ignoring the environment of a tiny system to this extent is sufficient cause for turning the system into a random one. (The statistical mechanics treatment in the review paper that I cited and you found too long to study tries to do better than just postulating collapse.)

It is our very simplified models (simplified for reasons of tractability) of something that is in reality far more complex that lead to the nondeterminism of the tiny subsystem getting our attention. This is not really different from taking a classical multiparticle system and then considering the dynamics of a subsystem alone - it cannot be deterministic. Take the system of a protein molecule and a drug molecule aimed at blocking it. If you assume a deterministic model for the complete system (using molecular dynamics) to be the true dynamics, and take the active sites of both molecules as the reduced system, with the remainder of the molecules assumed rigid (which is a reasonable simplified description), you'll find that the reduced system dynamics (computed from the projection operator formalism) will have inherited randomness from the large system, although the latter is deterministic.

stevendaryl said:
So the real weirdness of quantum mechanics is that we have a nondeterminism that doesn't seem to be due to lack of information about the details of the present state.

No. The real weirdness is that people discuss quantum foundations without taking into account the well-known background knowledge about chaotic systems. They take their toy models for the real thing, and are surprised that there remains unexplained ''irreducible'' randomness.
 
Last edited:
  • Like
Likes Jando
  • #281
stevendaryl said:
If we assume (as Einstein did) that causal influences propagate at lightspeed or slower, then [...] we don't need to know what conditions are like everywhere, just in the backward lightcone of where we are trying to make a prediction.

But we need to know the complete details of the universe in the backward light cones with apex at the spacetime positions at which we measure. This means all the details of the preparation and transmission, including all the details of the preparation equipment and the transmission equipment. For a nonlocal experiment over 1 km, the two backward light cones span, at the time of the preparation of the common signal, a spherical region of at least this size, which is a huge nonlocal system on all of whose details the prediction at the final two points may depend.
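
A rough back-of-the-envelope estimate (my own numbers, only to fix the scale): with the detectors ##d\approx 1## km apart and the source midway, each signal travels about 500 m, taking ##t\approx 500\,\mathrm{m}/c\approx 1.7\,\mu\mathrm{s}##. Traced back to the moment of preparation, the backward light cone of each detection event therefore has a spatial radius of order ##ct\approx 500## m, so the two cones together cover a region roughly a kilometre across, and in principle every detail inside it can enter the prediction.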

Thus to ''know what conditions are like just in the backward lightcone'' is a very formidable task, as any lack of detail in our model of what we assume in this light cone contributes to the nondeterminism. You dismiss this task with the single word ''just''.

Not a single paper I have seen takes this glaring loophole into account.
 
Last edited:
  • #282
A. Neumaier said:
Yes. The missing dynamics is that of the environment.

In all descriptions of Bell-like experiments, the very complex environment (obviously nonlocal, since it is the remainder of the universe) is reduced to one single act - the collapse of the state. Thus even if the universe evolves deterministically, ignoring the environment of a tiny system to this extent is sufficient cause for turning the system into a random one. (The statistical mechanics treatment in the review paper that I cited and you found too long to study tries to do better than just postulating collapse.)

Well, that's interesting, but surely that's not a standard view, that the apparent nondeterminism of QM would be resolved by ignored details of the rest of the universe?
 
  • #283
stevendaryl said:
Well, that's interesting, but surely that's not a standard view, that the apparent nondeterminism of QM would be resolved by ignored details of the rest of the universe?
None of my views on the foundations of quantum mechanics, as argued in this thread, is standard. Does it matter? It resolves or at least greatly reduces all quantum mysteries - that's all that matters.

In the past, I had spent a lot of time (too much for the gains I got) studying in detail the available interpretations of QM and found them wanting. Then I noticed more and more small but important things that people ignore routinely in foundational matters although they are discussed elsewhere:

  • It is fairly well known that real measurements are rarely von Neumann measurements but POVM measurements. Nevertheless, people are content to base their foundations on the former.
  • It is well-known that real systems are dissipative, and it is known that these are modeled in the quantum domain by Lindblad equations (lots of quantum optics literature exists on this). Nevertheless, people are content to base their foundations on a conservative (lossless) dynamics.
  • It is well-known how dissipation results from the interaction with the environment. Nevertheless, people are content to ignore the environment in their foundations. (This changed a little with time. There is now often lip service to decoherence, and also often claims that it settles things when taken together with the traditional assumptions. It doesn't, in my opinion.)
  • It is known (though less well-known) that models in which the electromagnetic field is treated classically and only the detector is quantized produce exactly the same Poisson statistics for photodetection as models employing a quantum field in a coherent state. This conclusively proves that the detector signals are artifacts produced by the detector and cannot be evidence of photons (since they are completely absent in the first model). Nevertheless, people are content to treat in their foundations detector signals as proof of photon arrival. (A minimal numerical sketch of the semiclassical Poisson statistics follows after this list.)
  • It is well-known that the most fundamental theory of Nature is quantum field theory, in which particles are mere field excitations and not the basic ontological entities. Nevertheless, people are content to treat in their foundations quantum mechanics in terms of particles.
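
A minimal numerical sketch of the semiclassical photodetection point above (this is an illustrative toy, not one of the models from the literature the list refers to; detector efficiency, intensity and time step are made-up numbers):

```python
import numpy as np

# A *classical* light field of constant intensity I falls on a detector whose
# click probability in a short interval dt is eta*I*dt.  The resulting count
# statistics in a window T are Poissonian (mean ~ variance), exactly as for a
# quantum coherent state of the same mean intensity.

rng = np.random.default_rng(3)
eta, I, dt, T = 0.4, 1.0, 1e-3, 10.0     # made-up efficiency, intensity, time step, window
steps = int(T / dt)

def counts_in_window():
    # Each small interval clicks independently with probability eta*I*dt.
    return int(np.sum(rng.random(steps) < eta * I * dt))

samples = np.array([counts_in_window() for _ in range(5000)])
print("mean counts:", round(samples.mean(), 2), " variance:", round(samples.var(), 2))
# For Poisson statistics mean and variance agree (here both are about eta*I*T = 4).
```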
Taken together, I could no longer take seriously the mainstream foundational studies, and lost interest in them. Instead, an alternative view formed in my mind and became more and more comprehensive with time. Where others saw weirdness I saw lack of precision in the arguments and arguments with too simplified assumptions, and I saw different ways of phrasing in ordinary language exactly the same math that underlies the standard, misleading language.

This being said, let me finally note that it is well-known that decoherence turns pure states into mixed states. Since pure states follow a deterministic quantum dynamics, this shows that, for purely mathematical reasons - and independent of which ontological status one assigns to the wave function - accounting for the unmodelled environment produces statistical randomness in addition to the alleged irreducible quantum randomness inherent in the interpretation of the wave function. Thus, to answer your question,
stevendaryl said:
surely that's not a standard view, that the apparent nondeterminism of QM would be resolved by ignored details of the rest of the universe?
I conclude that it is a standard view that ignoring details of the rest of the universe introduces additional nondeterminism. The only nonstandard detail I am suggesting is that the same mechanism that is already responsible for a large part of the observed nondeterminism (all of statistical mechanics is based on it) can as well be taken to be responsible for all randomness. Together with shifting the emphasis from the wave function (a mathematical tool) to the density matrix (a matrix well-known to contain the physical information, especially the macroscopic, classical one), all of a sudden many things make simple sense. See my post #257 and its context.
Those who believe in the power of Occam's razor should therefore prefer my approach. It also removes one of the philosophical problems of quantum mechanics - to give irreducible randomness an objective meaning.
 
  • #284
A. Neumaier said:
None of my views on the foundations of quantum mechanics, as argued in this thread, is standard. Does it matter? It resolves or at least greatly reduces all quantum mysteries - that's all that matters.

  • It is fairly well known that real measurements are rarely von Neumann measurements but POVM measurements. Nevertheless, people are content to base their foundations on the former.
  • It is well-known that real systems are dissipative, and it is known that these are modeled in the quantum domain by Lindblad equations (lots of quantum optics literature exists on this). Nevertheless, people are content to base their foundations on a conservative (lossless) dynamics.
  • It is well-known how dissipation results from the interaction with the environment. Nevertheless, people are content to ignore the environment in their foundations. (This changed a little with time. There is now often lip service to decoherence, and also often claims that it settles things when taken together with the traditional assumptions. It doesn't, in my opinion.)
  • It is known (though less well-known) that models in which the electromagnetic field is treated classically and only the detector is quantized produce exactly the same Poisson statistics for photodetection as models employing a quantum field in a coherent state. This conclusively proves that the detector signals are artifacts produced by the detector and cannot be evidence of photons (since they are completely absent in the first model). Nevertheless, people are content to treat in their foundations detector signals as proof of photon arrival.
  • It is well-known that the most fundamental theory of Nature is quantum field theory, in which particles are mere field excitations and not the basic ontological entities. Nevertheless, people are content to treat in their foundations quantum mechanics in terms of particles.
I agree with all of that, but I'm not at all convinced that taking into account all of that complexity makes any difference. There is a reason that discussions of Bell's inequality and other foundational issues use simplified models, and that is that reasoning about the more realistic models is much more difficult. The assumption is that if we can understand what is going on in the more abstract model, then we can extend that understanding to more realistic models. It's sort of like how when Einstein was reasoning about SR, he used idealized clocks and light signals, and didn't try to take into account that clocks might be damaged by rapid acceleration, or that the timing of arrival of a light signal may be ambiguous, etc. To make the judgment that a simplified model captures the essence of a conceptual problem is certainly error-prone, and any conclusion someone comes to is always eligible to be re-opened if someone argues that more realistic details would invalidate the conclusion.

But in the case of QM, I really don't have a feeling that any of the difficulties with interpreting QM are resolved by the complexities you bring up. It seems to me, on the contrary, that the complexities can't possibly resolve them in the way you seem to be suggesting.

Whether it's QM or QFT, you have the same situation:
  • You have an experiment that involves a measurement with some set of possible outcomes: ##o_1, o_2, \ldots, o_N##
  • You use your theory to predict probabilities for each outcome: ##p_1, p_2, \ldots, p_N##
  • You perform the measurement and get some particular outcome: ##o_j##
  • Presumably, if you repeat the measurement often enough with the same initial conditions, the relative frequency of getting ##o_j## will approach ##p_j##. (If not, your theory is wrong, or you're making some error in your experimental setup, or in your calculations, or something)
What you seem to be saying is that the outcome ##o_j## is actually determined by the details you left out of your analysis. That seems completely implausible to me, in light of the EPR experiment (unless, as in Bohmian mechanics, the details have a nonlocal effect). In EPR, Alice and Bob are far apart. Alice performs a spin measurement along a particular axis, and the theory says that she will get spin-up with probability 1/2 and spin-down with probability 1/2. It's certainly plausible, considering Alice's result in isolation, that the details of her measuring device, or the electromagnetic field, or the atmosphere in the neighborhood of her measurement might affect the measurement process, so that the result is actually deterministic, and the 50/50 probability is some kind of averaging over ignored details. But that possibility becomes completely implausible when you take into account the perfect anti-correlation between her result and Bob's. How do the details of Bob's device happen to always produce the opposite effect of the details of Alice's device?

I understand that you can claim that in reality, the anti-correlation isn't perfect. Maybe it's only 90% anti-correlation, or whatever. But that doesn't really change the implausibility much. In those 90% of the cases where they get opposite results, it seems to me that either the details of Bob's and Alice's devices are irrelevant, or that mysteriously, the details are perfectly matched to produce opposite results. I just don't believe that that makes sense. Another argument that it can't be the details of their devices that make the difference is that it is possible to produce electrons that are guaranteed to be spin-up along a certain axis. Then we can test whether Alice always gets spin-up, or whether the details of her measuring device sometimes convert that into spin-down. That way, we can get an estimate as to the importance of those details. My guess is that they aren't important, but I need somebody who knows about experimental results to confirm or contradict that guess.

So if the ignored, microscopic details of Alice's and Bob's devices aren't important (and I just don't see how they plausibly can be), that leaves the ignored environment: the rest of the universe. Can details about the rest of the universe be what determines Alice's and Bob's outcomes? To me, that sounds like a hidden-variables theory of exactly the type that Bell tried to rule out. The hidden variable ##\lambda## in his analysis just represents any details that are common to Alice's and Bob's measurements. The common environment would certainly count. Of course, Bell's proof might have loopholes that haven't been completely closed. But it seems very implausible to me.

What I would like to see is some kind of simulation of the EPR experiment in which the supposed nondeterminism is actually resolved by the ignored details. That's what would convince me.
 
  • #285
stevendaryl said:
Another argument that it can't be the details of their devices that make the difference is that it is possible to produce electrons that are guaranteed to be spin-up along a certain axis. Then we can test whether Alice always gets spin-up, or whether the details of her measuring device sometimes convert that into spin-down. That way, we can get an estimate as to the importance of those details. My guess is that they aren't important, but I need somebody who knows about experimental results to confirm or contradict that guess.
If the input is all-spin-up and the measurement tests for spin-up, the result will be deterministic independent of the details of the detector. But if the input is all spin-up and the measurement tests in another direction, the random result will be produced by the detector. Both can be seen by considering a model that inputs a classical polarized field and uses a quantum detector sensitive to the polarization direction.
stevendaryl said:
So if the ignored, microscopic details of Alice's and Bob's devices aren't important (and I just don't see how they plausibly can be), that leaves the ignored environment: the rest of the universe.
The ignored environment includes the microscopic details of Alice's and Bob's devices and how they were influenced by the common past. As I haven't done the calculations (remember the report I linked to needed 150 pages to make the case in the particular models studied there) I cannot tell what would be the mathematical result but I suspect it would just give what is actually observed.

But you are mixing two topics that should be kept separate - the question of whether perfect anticorrelations can be explained classically, and the question of whether quantum randomness can be explained by restricting the deterministic quantum dynamics of the universe. Deterministic is far from equivalent with classical and/or Bell-local! Therefore these are very different questions.

The quantum mechanical correlations observed in a tiny quantum system come from the quantum mechanical dynamics of the density matrix of the universe - there is nothing classical in the latter, hence one shouldn't expect that the restriction to a tiny subsystem would be classical. On the contrary, all we know about the many actually studied subsystems of slightly larger quantum systems indicates that one gets exactly the usual quantum descriptions of the isolated subsystem, plus correction terms that account for additional randomness - decoherence effects, etc. There is no ground at all to think that this should become different when the systems get larger, and ultimately universe-sized.
 
  • #286
A. Neumaier said:
The ignored environment includes the microscopic details of Alice's and Bob's devices and how they were influenced by the common past.

But it seems to me that the perfect anti-correlations imply that the details of Alice's and Bob's devices AREN'T important. Alice can independently fool with the details of her device, and that won't upset the perfect anti-correlations with Bob's measurement.

But you are mixing two topics that should be kept separate - the question of whether perfect anticorrelations can be explained classically, and the question of whether quantum randomness can be explained by restricting the deterministic quantum dynamics of the universe. Deterministic is far from equivalent with classical and/or Bell-local! Therefore these are very different questions.

Yes, I agree that they are different questions, but as I said, I find the idea that quantum nondeterminism can be explained through ignored details about the rest of the universe to be sufficiently like the classical case that I am very dubious that it can be made to work. There are more exotic variants of this idea, such as the Bohmian approach (the extra details that resolve the nondeterminism are nonlocal) or the retrocausal approach (the extra details are found in the future, not in the present). But I find it very implausible that extra details about the causal past can possibly explain the nondeterminism. As I said, it would take a simulation (or a calculation, if I could follow it) to convince me of such a resolution. I am not a professional physicist, so I don't have the qualifications or knowledge to state this with certainty, but it seems to me that your suggestion might be provably impossible.
 
  • #287
stevendaryl said:
I am not a professional physicist, so I don't have the qualifications or knowledge to state this with certainty, but it seems to me that your suggestion might be provably impossible.

To me it seems most likely, of course unless we're drifting into a superdeterministic interpretation which is a feeling I'm getting.

Also I'm still not even clear what exactly is being argued: the idealized model was rejected without any attempt at reframing it in this new view, so we didn't get a good look at it.

Not to be a bore, but I think that to get at anything conclusive, the best bet is to go for last year's loophole-free experimental test. It is a realistic example and the setup is, after all, relatively simple. The problem is, honestly, that we don't really believe in this idea, and the burden of proof doesn't lie with the accepted framework (however unfair that may appear to be).
 
  • #288
ddd123 said:
To me it seems most likely, of course unless we're drifting into a superdeterministic interpretation which is a feeling I'm getting.

Yeah, well, superdeterminism is very irksome for philosophical and scientific reasons, but sometimes I wonder if it really is the answer. We think of the choices we make (about whether to measure this or that) as freely chosen, but since we are physical systems, obeying the same laws of physics as electrons, at some level we no more choose what we do than an electron does.
 
  • #289
stevendaryl said:
Yeah, well, superdeterminism is very irksome for philosophical and scientific reasons, but sometimes I wonder if it really is the answer. We think of the choices we make (about whether to measure this or that) as freely chosen, but since we are physical systems, obeying the same laws of physics as electrons, at some level we no more choose what we do than an electron does.

I don't think that's the problem. It's that the superdeterministic law would have to be concocted specifically to counter our fiddling with the instruments. It's more anthropocentric, not less, imho.
 
  • #290
ddd123 said:
I don't think that's the problem. It's that the superdeterministic law would have to be concocted specifically to counter our fiddling with the instruments. It's more anthropocentric, not less, imho.

I think that depends on the details of the superdeterministic theory. Just saying that there is a conspiracy is pretty worthless, but if someone could give a plausible answer to how the conspiracy is implemented, it might not be objectionable.
 
  • #291
stevendaryl said:
I think that depends on the details of the superdeterministic theory. Just saying that there is a conspiracy is pretty worthless, but if someone could give a plausible answer to how the conspiracy is implemented, it might not be objectionable.

Yes, I really can't imagine how that could be though. If such a theory ends up being non-magical-looking, wouldn't it be just a local realistic one, and thus nonexistent?
 
  • #292
ddd123 said:
Yes, I really can't imagine how that could be though. If such a theory ends up being non-magical-looking, wouldn't it be just a local realistic one, and thus nonexistent?

No. What Bell ruled out was the possibility of explaining the outcome of the EPR experiment by a function of the form:

$$P(A\ \&\ B\ |\ \alpha, \beta) \;=\; \sum_\lambda P(\lambda)\,P(A\ |\ \lambda, \alpha)\, P(B\ |\ \lambda, \beta)$$

A superdeterministic theory would modify this to
$$P(A\ \&\ B\ |\ \alpha, \beta) \;=\; \Big[\sum_\lambda P(\lambda)\,P(A\ \&\ \alpha\ |\ \lambda)\, P(B\ \&\ \beta\ |\ \lambda)\Big]\Big/P(\alpha\ \&\ \beta)$$

Alice and Bob's settings ##\alpha## and ##\beta## would not be assumed independent of ##\lambda##. That's a different assumption, and the fact that the former is impossible doesn't imply that the latter is impossible.
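
To make the first (Bell-local) form concrete, here is a small numerical sketch contrasting it with the quantum singlet prediction for the CHSH combination; the angles and the toy hidden-variable model are my own illustrative choices, not taken from any particular experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
a, a2 = 0.0, np.pi / 2            # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

def E_quantum(alpha, beta):
    # Singlet-state correlation for spin measurements along directions alpha, beta.
    return -np.cos(alpha - beta)

def E_local(alpha, beta, n=200_000):
    # Bell-local toy model: a shared hidden angle lam; each outcome depends only
    # on lam and the local setting, i.e. an instance of the factorized form above.
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(lam - alpha))
    B = -np.sign(np.cos(lam - beta))
    return float(np.mean(A * B))

def chsh(E):
    # Standard CHSH combination; |S| <= 2 for any Bell-local model.
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("CHSH, quantum singlet:       ", round(chsh(E_quantum), 3))  # about -2.828 = -2*sqrt(2)
print("CHSH, local hidden variables:", round(chsh(E_local), 3))    # about -2, at the classical bound
```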
 
  • #293
stevendaryl said:
I find the idea that quantum nondeterminism can be explained through ignored details about the rest of the universe to be sufficiently like the classical case that I am very dubious that it can be made to work.
I don't think this can be made to work.

But you misunderstood me. I am only claiming the first part, ''that quantum nondeterminism can be explained through ignored details about the rest of the universe'', but not that it makes the explanation sufficiently classical. It makes the explanation only deterministic, which for me is something completely different. Nevertheless it is a step forward. Unlike Bohmian mechanics it needs not the slightest alterations to the quantum formalism.
 
  • #294
ddd123 said:
the idealized model was rejected without any attempt at reframing it
So far I haven't discussed it in detail only because I haven't yet received the requested reassurance that there won't be any further shifting of ground like ''but I had intended ...'', or ''but there is another experiment where ...'', or ''but if you modify the setting such that ...'', where ... are changes in the precise description for which my analysis (of adequacy to the real world, and of similarity to classical situations) would no longer be appropriate.

Once it is clear which absolutely fixed setting is under discussion, with all relevant details, assumptions, and arguments for its weirdness fully spelled out, I'll discuss the model.
 
Last edited:
  • #295
A. Neumaier said:
I don't think this can be made to work.

But you misunderstood me. I am only claiming the first part, ''that quantum nondeterminism can be explained through ignored details about the rest of the universe'', but not that it makes the explanation sufficiently classical.

Well, regardless of whether it's classical or not, I don't believe that it is possible without "exotic" notions of ignored details (such as those that work backward in time or FTL).

It makes the explanation only deterministic, which for me is something completely different. Nevertheless it is a step forward. Unlike Bohmian mechanics it needs not the slightest alterations to the quantum formalism.

Well, if it works. That's what I find doubtful. Quantum mechanics through the Born rule gives probabilities for outcomes. For pure QM to give deterministic results means that the evolution of the wave function, when you take into account all the details of the environment, makes every probability go to either 0 or 1. That does not seem consistent with the linearity of quantum mechanics. If you have a wave function for the whole universe that represents Alice definitely getting spin-up, and you have a different wave function that represents Alice definitely getting spin-down, then the superposition of the two gives a wave function that represents Alice in an indeterminate state. So to me, either you go to Many Worlds, where both possibilities occur, or you go to something beyond pure QM, such as Bohm or collapse.
 
  • #296
stevendaryl said:
when you take into account all the details of the environment, makes every probability go to either 0 or 1. That does not seem consistent with the linearity of quantum mechanics.
Quantum mechanics is linear (von Neumann equation ##i \hbar \dot\rho=[H,\rho]##) only in the variables ##\rho## that we do not have experimental access to when the system has more than a few degrees of freedom (i.e., when a measuring device is involved). But it is highly nonlinear and chaotic in the variables that are measurable.

This can be seen already classically.
The analogue of the von Neumann equation for a classical multiparticle system is the Liouville equation ##\dot\rho=\{\rho,H\}##, which is also linear. But it describes faithfully the full nonlinear dynamics of the classical multiparticle system! The nonlinearities appear once one interprets the system in terms of the observable variables, where one gets, through the nonlinear BBGKY hierarchy, the nonlinear Boltzmann equation of kinetic theory and the nonlinear Navier-Stokes equations of hydrodynamics.

Similarly, one can derive the nonlinear Navier-Stokes equations of hydrodynamics also from quantum mechanics.

Note also that many of the technical devices of everyday life that produce discrete results and change in a discrete fashion are also governed by nonlinear differential equations. It is well-known how to get bistability in a classical dissipative system from a continuous nonlinear dynamics involving a double-well potential! There is nothing mysterious at all in always getting one of two possible definite discrete answers in a more or less random fashion from a nonlinear classical dynamics, which becomes a linear dynamics once formulated (fully equivalently) as a dynamics of phase space functions, which is the classical analogue (and classical limit) of the linear Ehrenfest equation for quantum systems.
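
A minimal sketch of this bistability (an overdamped particle in a double-well potential, with weak noise standing in for the unmodelled environment; all parameters are illustrative only):

```python
import numpy as np

# Overdamped motion in the double-well potential V(x) = (x^2 - 1)^2 / 4.
# Started at the unstable midpoint, each run settles into one of the two
# definite outcomes x = +1 or x = -1, roughly at random.

rng = np.random.default_rng(2)

def run(x0=0.0, steps=10_000, dt=2e-3, noise=0.3):
    x = x0
    for _ in range(steps):
        drift = -(x**3 - x)                            # -V'(x): pushes x toward one of the wells
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
    return x

finals = np.array([run() for _ in range(100)])
print("fraction ending near +1:", np.mean(finals > 0.0))
print("fraction ending near -1:", np.mean(finals < 0.0))
```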
 
  • #297
stevendaryl said:
without "exotic" notions of ignored details
The notion of ignored details I am referring to is nothing exotic at all but technically precisely the same as that routinely applied in the projection operator technique for deriving the equations for a reduced description. It is a very standard technique from statistical mechanics that can be applied (with a slightly different setting in each case) to a variety of situations, and in particular to the one of interest here (contraction of a quantum Liouville equation to a Lindblad equation for a small subsystem). The necessary background can be found in a book by Grabert. (Sorry, again more than a few pages only.)
 
  • #298
A. Neumaier said:
Quantum mechanics is linear (von Neumann equation ##i \hbar \dot\rho=[H,\rho]##) only in the variables ##\rho## that we do not have experimental access to when the system has more than a few degrees of freedom (i.e., when a measuring device is involved). But it is highly nonlinear and chaotic in the variables that are measurable.

As I said, what I would like to see is a demonstration (simulation, or derivation) that the evolution equations of QM (or QFT) lead to (in typical circumstances) selection of a single outcome out of a set of possible outcomes to a measurement. Is there really any reason to believe that happens? I would think that there is not; as a matter of fact, I would think that somebody smarter than me could prove that it doesn't happen. I'm certainly happy to be wrong about this.
 
  • #299
stevendaryl said:
I would like to see is a demonstration (simulation, or derivation) that the evolution equations of QM (or QFT) lead to (in typical circumstances) selection of a single outcome out of a set of possible outcomes
It is nothing particularly demanding, just a lot of technical work to get it right - like every detailed derivation in statistical mechanics. If I find the time I'll give a proper derivation - but surely not in the next few days, as it is the amount of work needed for writing a research paper.

Therefore I had pointed to an analogous result for a classical bistable potential. A 2-state quantum system (an electron with two basis states ''bound'' and ''free'', the minimal quantum measurement device) behaves qualitatively very similarly.
 
  • #300
A. Neumaier said:
Therefore I had pointed to an analogous result for a classical bistable potential. A 2-state quantum system (an electron with two basis states ''bound'' and ''free'', the minimal quantum measurement device) behaves qualitatively very similarly.

I understand how bistable potentials can be similar in some respects, but I don't think that works for distant correlations such as EPR. That's the demonstration that I would like to see: show how tiny details cause Alice and Bob to get definite, opposite values in the case where they are measuring spins along the same direction.
 
