Nobody understands quantum physics?

  • #201
There's no resolving this in this thread. Simple question insists/believes Nature must be deterministic. Morbert and vanhees71 are pointing out that our most successful physical theory is stochastic.
This is no different from demanding a mechanical exposition of the aether for electromagnetism and is scientifically regressive.
 
  • Like
Likes gentzen and martinbn
  • #202
vanhees71 said:
Yes, and QT teaches us that things are not predetermined, and the randomness for the outcome of measurements is an inherent property of Nature and not due to the physicist's ignorance. That's the great result of Bell's theoretical work and the outcome of the corresponding experimental "Bell tests".
Agreed. However, this was not something I questioned. My point doesn't make it through your philosophy filter :)

/Fredrik
 
  • Haha
Likes vanhees71
  • #203
Morbert said:
The goal of explaining why one outcome occurred instead of possible alternatives might be a personal one, but it cannot be insisted upon as a goal of theoretical physics. By this I mean a theory concerned with probabilities for alternative possibilities, but not the actualisation of one possibility, is not inherently a problem for theoretical physics, even if it motivates particular research programs like Bohmian mechanics.
Is there anything that can be insisted upon as a goal of theoretical physics? What is inherently a problem for theoretical physics?

My problem with consistent histories interpretation (CHI) is not that it doesn't pick one of the alternatives. My problem with it is that it doesn't even pick one consistent set of the alternatives. I would expect that theoretical physics should at least pick one consistent set (for example the set of all possible particle positions, or some other set), but CHI does not do even that. In a sense, the CHI is not one theory of nature, but a theory of "all" possible consistent theories of nature, where each consistent framework defines one possible consistent theory. CHI is a theory of theories, a meta-theory. Would you suggest that the inherent problem of theoretical physics is to develop a meta-theory, rather than a theory?
 
  • #204
Morbert said:
Quantum theories are the most experimentally verified theories we have.
The predictions of QM are spectacular only in the cases where its predictions are deterministic, for example in the case of g-2 in quantum electrodynamics. The probabilistic predictions of QM, on the other hand, are good, but not that spectacular.

Perhaps the only probabilistic prediction of QM which agrees with experiments to very high precision is the Planck distribution of thermal radiation, but this is a very generic prediction that does not even depend on the validity of the Born rule.
 
  • Like
Likes Simple question and A. Neumaier
  • #205
LittleSchwinger said:
There's no resolving this in this thread. Simple question insists/believes Nature must be deterministic.
Well, I've never seen unicorns, and conservation laws seem pretty real to me, both theoretically and physically.
How Nature conserves probability not only in a deterministic fashion but non-locally is indeed a nut I would like to see cracked.

LittleSchwinger said:
Morbert and vanhees71 are pointing out that our most successful physical theory is stochastic.
Do you mean stochastic thermodynamics? The way some people here are carelessly throwing "most successful" around is baffling. QM is a good theory, no doubt. Stochastic theories have an uncanny way of capturing truth about Nature. But it is NOT very useful in practice, compared to any other theoretical framework, on top of which civilization was actually built, and still is.

You see, this is not an opinion piece. Success can also be measured. You may prefer counting papers; I prefer counting bridges.

LittleSchwinger said:
This is no different from demanding a mechanical exposition of the aether for electromagnetism and is scientifically regressive.
This red herring is worn out to the point of being comical. Fields are fine. Hilbert spaces are fine. Any idea that is useful is fine by me. Any idea that contains a glaring self-contradiction, weak spot, incompleteness, or downright lack of foundation needs to be fixed in some way. It is that simple.

QM has a strange intoxicating effect on some folks, who start calling for things nobody ordered, like "quantum gravity". Did Einstein call for a GR'ed version of chemistry?

I personally think that critique is good. But if you start bashing people like Feynman, I think the arguments should be a little less flimsy.
 
  • #206
Morbert said:
(My emphasis) Modelling the interaction between the particle in spatial superposition and the detector array with a quantum theory will result in a new superposition state that entangles the particle with the detector.
But coarse-grained macroscopic physics has no superpositions. Thus coarse-graining does not explain everything about the detector, since it does not explain the actual outcome - unless one treats the detector as classical and hence can apply Born's rule to select one outcome stochastically. Therefore
A. Neumaier said:
Once one jointly claims (as vanhees71 does)
  • that there is no split between quantum and classical, and
  • that coarse-graining explains everything about the detector,
the measurement problem becomes unavoidable.
Morbert said:
My question: Why is this a problem? Why must we explain in particular the definite outcome? Why can't we accept QM as always treating all possible outcomes on equal footing apart from their probabilities?
One must explain it only if one claims the two points mentioned in my previous post.
Morbert said:
It is the rate of events that is explained/predicted, as opposed to the particular event of a single run right?
But an event is a classical concept, there are no events in unitary quantum theory. Your argumentation therefore imposes classical concepts upon quantum mechanics of macroscopic bodies.
Morbert said:
The goal of explaining why one outcome occurred instead of possible alternatives might be a personal one, but it cannot be insisted upon as a goal of theoretical physics. By this I mean a theory concerned with probabilities for alternative possibilities,
Only because your personal choice is to define theoretical physics in terms of probabilities.
 
  • #207
Demystifier said:
The predictions of QM are spectacular only in the cases where its predictions are deterministic, for example in the case of g-2 in quantum electrodynamics. The probabilistic predictions of QM, on the other hand, are good, but not that spectacular.

Perhaps the only probabilistic prediction of QM which agrees with experiments to very high precision is the Planck distribution of thermal radiation, but this is a very generic prediction that does not even depend on the validity of the Born rule.
What do you mean by "deterministic"? ##g-2## of electrons or muons is a parameter that can be predicted by the standard model after defining all the coupling constants of this model and as such can be tested by measuring it in experiment. The measurement of the magnetic moment of particles is as "probabilistic" as it is for any other observable in QT.

The Planck distribution of course is calculated by using Born's rule. How else do you want to interpret the statistical equilibrium operator you calculate for free photons to obtain this result?
 
  • #208
Simple question said:
Do you mean stochastic thermodynamics? The way some people here are carelessly throwing "most successful" around is baffling. QM is a good theory, no doubt. Stochastic theories have an uncanny way of capturing truth about Nature. But it is NOT very useful in practice, compared to any other theoretical framework, on top of which civilization was actually built, and still is.
This is a joke, isn't it? QT is not only the most comprehensive theory about matter, it is also the most important fundamental theory our modern technology is based on. Sitting behind a laptop, typing a message that I can send within a few moments to be readable worldwide, would be completely impossible without QT, which led to the development of modern solid-state and semiconductor physics, the transistor, integrated circuits, and all that.
Simple question said:
QM has a strange intoxicating effect on some folks, who start calling for things nobody ordered, like "quantum gravity". Did Einstein call for a GR'ed version of chemistry?
Einstein called for GR because there was no consistent description of the gravitational interaction within relativity theory. What was known was the Newtonian approximation, an action-at-a-distance description, which is not compatible with (special) relativistic causality and spacetime structure. The motivation was to find a relativistic theory of the gravitational interaction, and it led Einstein to the other cornerstone of modern physics, i.e., GR. The success is not only theoretical or academic: the possibility of navigation with GPS is entirely based on the findings of GR about the spacetime model, particularly the influence of the gravitational interaction on time measurement, whose accuracy is crucial for GPS to localize points on Earth with the few-meter precision needed to navigate from one point to another.

After non-relativistic QM was formulated in 1925/26, because it was necessary to understand the atomistic structure of the matter around us as well as the equilibrium distribution of radiation, i.e., Planck's radiation law, which started the discovery of QM in 1900, relativistic QFT was a necessary consequence: non-relativistic QM was not compatible with the relativistic spacetime model, and thus a relativistic QT had to be developed. It led to another great success in the form of the Standard Model of elementary particles.

It is thus a natural consequence to also look for a quantum theory of gravitation, to make the gravitational interaction consistent with relativity as well. Relativistic QFT is incomplete in the sense that it doesn't include a satisfactory description of the gravitational interaction, and thus one must look for a new, even more comprehensive theory that also includes gravity.
Simple question said:
I personally think that critique is good. But if you start bashing people like Feynman, I think the arguments should be a little less flimsy.
Nobody bashes Feynman!
 
  • Like
Likes LittleSchwinger
  • #209
Demystifier said:
Is there anything that can be insisted as a goal of theoretical physics? What is inherently a problem to theoretical physics?
I just mean the problem of indeterminism in QM is a subjective one, as opposed to an objective problem like e.g. the recovery of the Born rule in the many-worlds interpretation.
Demystifier said:
My problem with consistent histories interpretation (CHI) is not that it doesn't pick one of the alternatives. My problem with it is that it doesn't even pick one consistent set of the alternatives. I would expect that theoretical physics should at least pick one consistent set (for example the set of all possible particle positions, or some other set), but CHI does not do even that. In a sense, the CHI is not one theory of nature, but a theory of "all" possible consistent theories of nature, where each consistent framework defines one possible consistent theory. CHI is a theory of theories, a meta-theory. Would you suggest that the inherent problem of theoretical physics is to develop a meta-theory, rather than a theory?
What I am defending in this thread is the indeterministic character of QM. A defense of CHI would muddy the waters. For the purposes of this thread we can probably just take sets of histories as suggested procedures for constructing POVMs, i.e., when considering a measured system ##s## and detector array ##D## with a POVM ##\{\Delta(\alpha)\}##, we can construct this POVM from any set of histories ##\{C_\alpha\}## for which ##\Delta(\alpha) = \mathrm{tr}_DC_\alpha^\dagger C_\alpha \rho_D## holds. Histories are fictitious sequences of events that reproduce detector responses.
Demystifier said:
The predictions of QM are spectacular only in the cases where its predictions are deterministic, for example in the case of g-2 in quantum electrodynamics. The probabilistic predictions of QM, on the other hand, are good, but not that spectacular.
I'm not sure about the distinction here. E.g. QM is important for determining electrical characteristics in nanoscale devices or reaction rates and reaction pathways in chemical physics, but it can still be indeterministic in the Schwinger sense that precise knowledge of the quantum states does not precisely determine all events.
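For concreteness, the completeness of a POVM constructed this way can be checked numerically. The dimensions, class operators, and detector state below are toy choices of my own (reading ##\rho_D## in the trace as ##I_s \otimes \rho_D##); this is a sketch, not anything from the literature:

```python
import numpy as np

ds, dD = 2, 2  # toy dimensions: qubit system s, two-state detector D

def partial_trace_D(M, ds, dD):
    """Trace out the detector factor of an operator on H_s (x) H_D (kron ordering: s first)."""
    return M.reshape(ds, dD, ds, dD).trace(axis1=1, axis2=3)

rho_D = np.diag([1.0, 0.0])  # detector prepared in its "ready" state |0><0|

# Toy class operators C_alpha: the detector flips iff the system is in |1>
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
C = [np.kron(P0, np.eye(dD)), np.kron(P1, X)]

# Delta(alpha) = tr_D [ C_alpha^dag C_alpha (I_s (x) rho_D) ]
Delta = [partial_trace_D(c.conj().T @ c @ np.kron(np.eye(ds), rho_D), ds, dD)
         for c in C]
assert np.allclose(sum(Delta), np.eye(ds))  # POVM elements sum to the identity on H_s
```

Here the two histories simply record which system state the detector responded to, and the resulting POVM elements reduce to the system projectors, as expected for an ideal measurement.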
 
  • Like
Likes gentzen
  • #210
A. Neumaier said:
But an event is a classical concept, there are no events in unitary quantum theory.
Strange. The emission of a photon is not an event? Quantum theory does not describe this? Do you envision the decay of a neutron as a gradual, continuous process, with the neutron slowly turning into a proton? This doesn't seem to be mainstream physics.
 
  • Like
Likes vanhees71
  • #211
WernerQH said:
Strange. The emission of a photon is not an event? Quantum theory does not describe this?
I think Neumaier's point is that the unitary evolution does not involve any events. It is an expectation of what happens in between events, which are just the end points.

To solve the measurement problem, we need for example a unitary description of the actual measurement, but the problem is that this involves a classical system, and the classical background is assumed to be part of the observing context. The unitary description would have to deal with how the classical detection events, which are supposedly real, can be allowed when the system is at the same time in superposition. We need to crack the sort of implicit duality between the measurement process and a regular physical interaction. Decoherence just solves this by saying that the classical system really isn't classical! It's just a "complex" quantum system. But this pushes the "scientific inference" out to imaginary observers at some point. This is what I objected to. In the extremes, this imaginary observer either becomes a black hole or somehow asymptotic in its existence (which is useless, as it's not where we "live").

/Fredrik
 
  • Like
Likes gentzen
  • #212
WernerQH said:
Strange. The emission of a photon is not an event? Quantum theory does not describe this?
An event with an exact spacetime location would be a classical concept. Of course, you could mean something else by that word. But if I remember correctly, you previously indicated that you indeed mean such an exact spacetime location.

WernerQH said:
Do you envision the decay of a neutron as a gradual, continuous process, with the neutron slowly turning into a proton?
Why do you write "envision"? What is unclear about the concept of "unitary quantum mechanics" for you? As long as you don't explicitly measure, the probability that the neutron has turned into a proton indeed gradually increases. And if you explicitly measure, then you learn whether or not it has turned into a proton, but not when or where exactly that happened.

WernerQH said:
This doesn't seem to be mainstream physics.
I get the impression that it is more mainstream physics than what you seem to suggest instead.
 
  • #213
WernerQH said:
Strange. The emission of a photon is not an event? Quantum theory does not describe this?
There is no notion of an event in quantum theory (without an interpretation in classical terms). The whole quantum formalism proceeds without the concept of an event. The theory only describes the dynamics of photon states and S-matrices, but not the associated events in real life.

Events do not appear in quantum theory but only in quantum practice, namely when the formalism is interpreted in terms of measurement.

WernerQH said:
Do you envision the decay of a neutron as a gradual, continuous process, with the neutron slowly turning into a proton? This doesn't seem to be mainstream physics.
Mainstream quantum physics says that the wave function dynamics is a continuous process, governed by the Schrödinger equation. Even decoherence arguments assume that decay takes time.

In quantum theory, a neutron never turns into a proton, but a superposition of both dynamically changes weights. The events appear - outside the strict theory - upon interpreting the quantum results in terms of what can be observed.
 
Last edited:
  • Like
Likes physika, dextercioby, Simple question and 2 others
  • #214
For experiments on QM systems, a single event doesn't constitute a measurement. Just because my magnetic-monopole detector clicked one time late at night doesn't mean that magnetic monopoles have been shown to exist by measurement. The rules of QM (and those of most reputable journals) are about measurements, which of necessity comprise multiple events. You have to do the statistics. This remains true even in the edge cases in which an outcome has probability 1. I think this distinction between isolated events and measurements is blurred in the minds of many.
 
  • #215
Of course quantum dynamics describes the ##\beta## decay of a neutron. A neutron is not an energy eigenstate when the weak interaction is taken into account, and thus, having prepared a neutron at time ##t=0##, the unitary time evolution including the weak interaction leads to a state with a nonzero probability to find a proton, an electron, and an electron-antineutrino at any time ##t>0##. So indeed QT unitary time evolution describes the ##\beta## decay of a neutron.

As with any quantum state, in this situation it describes the probability to find at some given time ##t>0## a proton, electron, and electron-antineutrino instead of a neutron. It is generically impossible to know when an individual neutron decays. The survival probability for the neutron (considered to be at rest) is approximately given by the "radioactive decay law", ##P(t)=\exp(-t/\tau)##, where ##\tau## is the lifetime of the neutron (a pretty delicate quantity, but that's another story ;-)).
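Numerically the decay law is trivial to evaluate. A minimal sketch, using the commonly quoted neutron lifetime of roughly 879 s purely as an illustrative value:

```python
import math

def survival_probability(t, tau):
    """Radioactive decay law: P(t) = exp(-t / tau) for an unstable particle at rest."""
    return math.exp(-t / tau)

tau_n = 879.0  # neutron lifetime in seconds (approximate value, for illustration only)

p = survival_probability(tau_n, tau_n)  # probability of surviving one lifetime, exp(-1)
t_half = tau_n * math.log(2)            # half-life from P(t_half) = 1/2
```

Note that ##\tau## here is the mean lifetime; the half-life is shorter by the factor ##\ln 2##, a distinction that occasionally causes confusion when lifetimes are quoted.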
 
  • Like
Likes gentzen
  • #216
A. Neumaier said:
But an event is a classical concept, there are no events in unitary quantum theory. Your argumentation therefore imposes classical concepts upon quantum mechanics of macroscopic bodies.
I'm using event in the probability-theory sense. When we consider the state space of the source + detector, ##\mathcal{H}_s\otimes\mathcal{H}_D##, we build a sample space of measurement outcomes ##\{\Pi_i\}## with a projective decomposition of the identity ##I = \sum_i \Pi_i##. We can then build an event algebra from all projectors of the form ##P = \sum_i \lambda_i \Pi_i##, where each ##\lambda_i## is either zero or one.
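A toy numerical version of this construction, with the dimension and state chosen only for illustration:

```python
import numpy as np

d = 4  # toy dimension for the combined source + detector space
basis = np.eye(d)

# Sample space: projective decomposition of the identity, I = sum_i Pi_i
Pi = [np.outer(basis[i], basis[i]) for i in range(d)]
assert np.allclose(sum(Pi), np.eye(d))

def event(lam):
    """Event projector P = sum_i lam_i Pi_i, with each lam_i in {0, 1}."""
    return sum(l * Q for l, Q in zip(lam, Pi))

P = event([1, 0, 1, 0])       # the event "outcome 0 or outcome 2"
assert np.allclose(P @ P, P)  # events are again projectors

rho = np.eye(d) / d              # maximally mixed state, purely for illustration
prob = float(np.trace(rho @ P))  # Born-rule probability of the event, 0.5 here
```

The binary coefficients make the event algebra closed under complement (flip all the ##\lambda_i##) and under joins of disjoint events, which is exactly the probability-theory structure intended above.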
 
  • Like
Likes gentzen
  • #217
Paul Colby said:
For experiments on QM systems, a single event doesn't constitute a measurement.
...
You have to do the statistics
...
I think this distinction between isolated event and measurements is blurred in the minds of many.
The "problem" with QM as it stands is exactly that it lives only at the perfect statistical level. But real inferences invariably work with limited sampling and limited processing capacity. This is ultimately why QM appears timeless and asymptotic.

A real agent/observer can, for multiple reasons, never attain perfect knowledge of the statistics (the distributions), as there is a race condition where the subject matter changes during the inference process, and the "perfect" postprocessing or decoding needed to abduce patterns simply takes time. And it may not even be possible to repeat the experiment. So the agent's decision has to be made on available, imperfect conclusions.

This "intermediate" process is where QM doesn't work. QM works where relatively unlimited or "sufficient" statistics can be compiled and where the agent is never saturated with information.

A QM experiment goes in principle like this: a preparation procedure must be defined and tested to make sure we are confident about the distributions coming out of it. Then the "final state" also requires repeating this enough times that we can determine, within the desired accuracy, the "distribution" of final detections. Such an "experiment" is clearly not how a normal interaction takes place. A normal interaction happens once and then moves on to the next (which is typically not pulled from the same ensemble).

So while I agree that the minimal ensemble interpretation is quite correct about how QM is corroborated, this is also precisely what makes it problematic when applying it to any real scenario that is not a controlled experiment repeated 1000 times.

/Fredrik
 
  • #218
Before someone says, it's just the same in classical physics...
Fra said:
So the agent's decision has to be made on available, imperfect conclusions.
This is the difference, if, after solving the measurement problem, you see that we need to unify measurement and physical interactions. So this imperfection (as opposed to just the physicist's imperfect knowledge) should in theory give observable consequences for the agent's interactions. (On par with Bell entanglement, for example, but more involved.)

/Fredrik
 
  • #219
Morbert said:
I'm using event in the probability-theory sense. When we consider the state space of the source + detector, ##\mathcal{H}_s\otimes\mathcal{H}_D##, we build a sample space of measurement outcomes ##\{\Pi_i\}## with a projective decomposition of the identity ##I = \sum_i \Pi_i##. We can then build an event algebra from all projectors of the form ##P = \sum_i \lambda_i \Pi_i##, where each ##\lambda_i## is either zero or one.
According to the explanation of event given in Wikipedia, it means that all events occur in every experiment (if all probabilities are nonzero). This is surely not the case in physical experiments. Thus either the notion of event or the notion of experiment (as used in Wikipedia) is physically irrelevant. Therefore your argument is spurious.
 
Last edited:
  • Skeptical
Likes WernerQH
  • #220
A. Neumaier said:
There is no notion of an event in quantum theory (without an interpretation in classical terms). The whole quantum formalism proceeds without the concept of an event.
So Feynman diagrams do not describe anything real? Don't they play a role in the theory? Theorists derive them using perturbation theory and asymptotic states, but is it in your view coincidental that many physicists think of them as describing real processes?
A. Neumaier said:
Events do not appear in quantum theory but only in quantum practice, namely when the formalism is interpreted in terms of measurement.
Planck was forced to think of the emission of radiation as a discontinuous process. It is ironic that you seem to suggest that the real world changes continuously.
A. Neumaier said:
Mainstream quantum physics says that the wave function dynamics is a continuous process, governed by the Schrödinger equation.
Unitary evolution according to Schrödinger's equation can't be the whole story. Microscopic events add discreteness and randomness to the picture, and there is ample evidence for them. Why do you call events a classical notion? I think that's a distortion.
 
  • #221
Paul Colby said:
For experiments on QM systems, a single event doesn't constitute a measurement.
This is a very sensible point of view. In this view, what is measured are never events. Indeed, assuming your statement, an induction argument shows that no number of events constitutes a measurement.

Thus only what is computed from a number of events is a measurement - with an uncertainty determined by the statistics. This means that the true observables (and indeed, what is reported in papers on experimental quantum physics) are probabilities and expectations, and not eigenvalues. This is precisely the point of view of my thermal interpretation.

Paul Colby said:
For experiments on QM systems, a single event doesn't constitute a measurement.
But this is not the usage in quantum foundations. There, a response of the detector is regarded as a measurement of the presence of a single photon. The only question is whether the response measured a stray photon or one that was deliberately sent.
 
  • Like
Likes Simple question and gentzen
  • #222
gentzen said:
As long as you don't explicitly measure, the probability that the neutron has turned into a proton indeed gradually increases.
Yes, that's the theorist's picture. The experimentalist knows that the neutron decays even when he's not "measuring", and on a time scale shorter than microseconds.
 
  • #223
WernerQH said:
So Feynman diagrams do not describe anything real?
Only the integration over 4 real variables.
WernerQH said:
Don't they play a role in the theory?
Only as illustrations.
WernerQH said:
Theorists derive them using perturbation theory and asymptotic states, but is it in your view coincidental that many physicists think of them as describing real processes?
Yes. It is just informal imagery for formally precise integration procedures. See
https://www.physicsforums.com/insights/vacuum-fluctuation-myth/
WernerQH said:
Planck was forced to think of the emission of radiation as a discontinuous process. It is ironic that you seem to suggest that the real world changes continuously.
30 years later, quantum statistical mechanics explained the Planck spectrum without any recourse to discontinuity.
WernerQH said:
Unitary evolution according to Schrödinger's equation can't be the whole story.
Perhaps not. What else do you advocate?
WernerQH said:
Microscopic events add discreteness and randomness to the picture, and there is ample evidence for them. Why do you call events a classical notion? I think that's a distortion.
Microscopic events cause discreteness also in classical mechanics. Whenever you switch on your computer you are doing something discrete which is microscopically continuous.
 
  • Like
Likes Motore and gentzen
  • #224
WernerQH said:
Yes, that's the theorist's picture. The experimentalist knows that the neutron decays even when he's not "measuring",
because the environment does enough measuring. Anthropomorphic detectors are not needed for that, only for isolating things to be able to study them quantitatively.
WernerQH said:
and on a time scale shorter than microseconds.
Yes, but nobody has ever seen a neutron discontinuously turn into a proton.
 
  • Like
Likes gentzen
  • #225
WernerQH said:
So Feynman diagrams do not describe anything real? Don't they play a role in the theory?
A. Neumaier said:
Only as illustrations.
Haha.

A. Neumaier said:
Whenever you switch on your computer you are doing something discrete which is microscopically continuous.
I think you got this backwards.

A. Neumaier said:
Yes, but nobody has ever seen a neutron discontinuously turn into a proton.
You sound like Ernst Mach commenting on atoms.
 
  • #226
martinbn said:
I think this is a very misleading example. The development of general relativity was a result of Einstein solving physics-motivated problems with hardcore mathematics, not philosophy. The philosophy part, like the hole argument, actually slowed him down. Only when he was able to shrug off the philosophy did he make progress.
Another post I just don't get. Again, as I said, the motivation for Einstein to develop GR was not a mismatch between theory and empirical results. But from my point of view, to call the hole argument "the philosophy part" is just silly. The hole argument was Einstein's encounter with the true meaning of gauge invariance applied to spacetime and its coordinates. It showed that physically it doesn't make sense to label spacetime points as events before the metric is introduced, something which intuitively is not trivial at all. In a time when gauge invariance was not well understood, this was a deep conceptual issue of the theory. To call that "the philosophy part" is, in my view, wrong on many levels. It's physics, people. It's not "mere philosophy". We are talking about the precise meaning of the metric field, the meaning of when you can interpret spacetime points as events, and the application of gauge invariance to spacetime. How is that not physics but "the philosophy part"?

Of course, in retrospect, with hundreds of textbooks written on GR, it's easy to dismiss Einstein's hole argument and his departure from general covariance as silly. And yes, I also don't get why many philosophers of science still go on and on about the hole argument and talk about "manifold substantivalism" and what not. But to dismiss the hole argument for these reasons as "the philosophy part of GR" is, as I see it, a denial of both the physics and the history behind the development of General Relativity.

As an experiment during my PhD, I tried a few times to expose string theory people to explicit physical examples of the hole argument. And you would be surprised by how many people were confused by it. I once heard a string theorist who later worked at MIT say that he never truly understood the subtleties of the argument. Maybe I and many physicists with me are just not smart enough because we're impressed by the subtleties of this "mere philosophy", but I seriously consider the idea that many people just don't appreciate the deep physical thinking of Einstein when he considered the nature of general covariance at that time. But what I'm quite convinced of is that it's categorically wrong to put it away as "the philosophy part".

I guess we're talking about the demarcation between physics and philosophy here, but to me, labeling it as "the philosophy part", and the attitude that it was merely something to shrug off before moving on, sounds like an underappreciation of the deep conceptual structure of GR.
 
Last edited:
  • Like
Likes Simple question and gentzen
  • #227
WernerQH said:
I think you got this backwards.
Switch = toggling a binary state = discrete.
Continuous = treatment of the switching process by classical electrodynamics, which is continuous.

We see everywhere the Discrete appear as a simplified description of the Continuous.

In mainstream physics, time is always continuous. Heaviside jumps are incompatible with relativistic quantum field theory. To do something discrete you would therefore need to explain what happens to the neutron in the tiny interval between when it stops being a neutron and when it becomes a proton.
 
Last edited:
  • Like
Likes Simple question, gentzen and dextercioby
  • #228
vanhees71 said:
What do you mean by "deterministic"? ##g-2## of electrons or muons is a parameter that can be predicted by the standard model after defining all the coupling constants of this model and as such can be tested by measuring it in experiment. The measurement of the magnetic moment of particles is as "probabilistic" as it is for any other observable in QT.

The Planck distribution of course is calculated by using Born's rule. How else do you want to interpret the statistical equilibrium operator you calculate for free photons to obtain this result?
An electron is a magnetic-moment eigenstate, so the value of the magnetic moment is not uncertain. That's what I mean when I say that it's deterministic rather than probabilistic.

Planck predicted the Planck distribution 25 years before Born postulated the Born rule. Planck derived it from classical probabilistic reasoning, combined with the hypothesis that the energy (associated with a given frequency) can take only discrete values. Sure, one can also derive the Planck distribution from the modern Born rule for general mixed states, but my point is that the Planck distribution can be derived even without it.
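For what it's worth, the distribution itself is easy to evaluate numerically. The sketch below (temperatures and frequencies are my illustrative choices) also shows the classical Rayleigh-Jeans form emerging in the low-frequency limit ##h\nu \ll kT##:

```python
import math

h = 6.62607015e-34  # Planck constant, J s
k = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8    # speed of light, m/s

def planck_u(nu, T):
    """Planck spectral energy density: (8 pi h nu^3 / c^3) / (exp(h nu / k T) - 1)."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def rayleigh_jeans_u(nu, T):
    """Classical limit: u = 8 pi nu^2 k T / c^3 (no h anywhere)."""
    return 8 * math.pi * nu**2 * k * T / c**3

T = 300.0  # room temperature
ratio_low = planck_u(1e9, T) / rayleigh_jeans_u(1e9, T)    # ~1 at 1 GHz (h nu << k T)
ratio_high = planck_u(1e15, T) / rayleigh_jeans_u(1e15, T) # ~0: the quantum cutoff
```

Using `math.expm1` keeps the low-frequency regime numerically stable, where `exp(x) - 1` would otherwise lose precision for tiny ##h\nu/kT##.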
 
  • Like
Likes Simple question
  • #229
haushofer said:
but I seriously consider the idea that many people just don't appreciate the deep physical thinking of Einstein when he considered the nature of general covariance at that time. But what I'm quite convinced of is that it's categorically wrong to put it away as "the philosophy part".
I also see shades of grey here. The sort of "philosophy" relevant here is philosophy of science and physics, which I see as rightfully belonging to the foundations of the same (not its application, though). I just don't see the obsession with drawing clear lines here. I think we can tell educated rational reasoning from random crackpottery easily enough.

We are talking about the philosophical ponderings of the fathers of many modern theories in their process of formulating new hypotheses that later turned out well; it's not like we are talking about modern crackpots or the reasoning of Confucius or Nietzsche.

/Fredrik
 
  • Like
Likes haushofer, Simple question and gentzen
  • #230
Morbert said:
I just mean the problem of indeterminism in QM is a subjective one, as opposed to an objective problem like e.g. the recovery of the Born rule in the many-worlds interpretation.
I see, but the problem of explaining definite outcomes has not much to do with determinism. The problem is not to predict which outcome will be realized; the realization of an outcome may well be random. The problem is to explain why a single outcome (rather than all outcomes at once) is realized at all.

To explain the outcome one needs a variable that describes the outcome itself, rather than the probability of an outcome. This variable may evolve either deterministically or stochastically, but the problem is that standard QM does not contain such a variable at all. Standard QM contains variables that describe the probabilities of outcomes, but it does not contain variables that describe the random outcomes themselves. For example, it contains the probability amplitude ##\psi(x,t)##, but it does not contain ##x(t)##. How can one explain that the particle has a real position ##x## at time ##t## if there is no real variable ##x(t)##?
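To make the distinction concrete, here is a toy sketch (the Gaussian wave packet and the sampling step are illustrative choices of mine, not anyone's proposed theory): the formalism supplies the density ##|\psi(x)|^2##, while an actual outcome ##x## only appears once a sampling step is added by hand on top of the formalism.

```python
import math
import random

# Toy illustration: the quantum formalism supplies a probability density
# |psi(x)|^2; any realized outcome x must be drawn by an extra step on top.
SIGMA = 1.0  # width of an assumed Gaussian wave packet (illustrative choice)

def born_density(x, sigma=SIGMA):
    # |psi(x)|^2 for psi(x) = (pi*sigma^2)^(-1/4) * exp(-x^2 / (2*sigma^2))
    return math.exp(-x * x / (sigma * sigma)) / (sigma * math.sqrt(math.pi))

def draw_outcome(rng, sigma=SIGMA):
    # The non-formalism step: sample x from |psi|^2 (a Gaussian, std sigma/sqrt(2))
    return rng.gauss(0.0, sigma / math.sqrt(2))

rng = random.Random(0)
samples = [draw_outcome(rng) for _ in range(10_000)]
print(round(sum(samples) / len(samples), 3))  # sample mean close to 0
```

Nothing in `born_density` singles out one ##x##; the single realized value exists only because `draw_outcome` was bolted on.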
 
  • Like
Likes Simple question and Fra
  • #231
Fra said:
This "intermediate" process is where QM doesn't work.
By "not work" I assume you mean it does not provide the information about individual events that you expect or would like from a theory? Working with incomplete or finite statistics is the bane of any probabilistic theory and is not in itself an argument against QM, IMO. QM provides probabilities and the restriction of events to eigenvalues. Bell-type experiments really seem to limit the nature of any hypothetical additional information a theory might provide. They certainly dim hopes that one might interpret one's way out.
 
  • #232
Demystifier said:
I see, but the problem of explaining definite outcomes has not much to do with determinism. The problem is not to predict which outcome will be realized; the realization of an outcome may well be random. The problem is to explain why a single outcome (rather than all outcomes at once) is realized at all.

To explain the outcome one needs a variable that describes the outcome itself, rather than the probability of an outcome. This variable may evolve either deterministically or stochastically, but the problem is that standard QM does not contain such a variable at all. Standard QM contains variables that describe the probabilities of outcomes, but it does not contain variables that describe the random outcomes themselves. For example, it contains the probability amplitude ##\psi(x,t)##, but it does not contain ##x(t)##. How can one explain that the particle has a real position ##x## at time ##t## if there is no real variable ##x(t)##?
But a theory should describe the world the way it is, not the way we wish it were. What if it is not possible to have such variables?
 
  • #233
martinbn said:
But a theory should describe the world the way it is, not the way we wish it were. What if it is not possible to have such variables?
I agree. But the association I made was that such placeholder variables likely do exist; it's their influence on causality that does not work "the way some wish". And these placeholders might not even have the property of being "observables" in the technical sense.

Bell's theorem only forbids a specific type of hidden variables: those that are objective.

I do not share Bohmian ideas, but on this point I see sound logical possibilities. It's the idea that some variables are subjective, and that only some of these form equivalence classes to restore objectivity.

/Fredrik
 
  • #234
A. Neumaier said:
According to the explanation of event given in Wikipedia, it means that all events occur in every experiment (if all probabilities are nonzero). This is surely not the case in physical experiments. Thus either the notion of events or the notion of experiment (as used in Wikipedia) is physically irrelevant. Therefore your argument is spurious.
Demystifier said:
I see, but the problem of explaining definite outcomes has not much to do with determinism. The problem is not to predict which outcome will be realized; the realization of an outcome may well be random. The problem is to explain why a single outcome (rather than all outcomes at once) is realized at all.

To explain the outcome one needs a variable that describes the outcome itself, rather than the probability of an outcome.
Given a sample space of possible outcomes ##\{o_i\}## of an experiment involving measured system ##s## and detector array ##D##, the probability of an outcome occurring in a given run is $$p(o_i) = \mathrm{tr}_{sD}(\Pi_i(t)\rho_s\otimes\rho_D)$$The probability of an event like ##o_i\lor o_j## occurring is $$p(o_i\lor o_j)=\mathrm{tr}_{sD}([\Pi_i(t)+\Pi_j(t)]\rho_s\otimes\rho_D)$$The probability of all outcomes occurring at once in a given run is $$p(o_1\land o_2\land\dots\land o_N) = \mathrm{tr}_{sD}(\Pi_1(t)\Pi_2(t)...\Pi_N(t)\rho_s\otimes\rho_D) = 0$$Where am I going wrong?
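For readers who want to check the arithmetic, here is a minimal two-outcome toy version of these traces (the density matrix and the projectors are illustrative stand-ins of mine, not the full ##s\otimes D## setup): the single-outcome probabilities come out sensible, the disjunction is exhaustive, and the conjunction of orthogonal outcomes vanishes.

```python
import numpy as np

# Toy 2-level system standing in for the measured-system + detector setup
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a valid density matrix

P1 = np.diag([1.0, 0.0]).astype(complex)  # projector for outcome o_1
P2 = np.diag([0.0, 1.0]).astype(complex)  # projector for outcome o_2

p1 = np.trace(P1 @ rho).real             # p(o_1)
p2 = np.trace(P2 @ rho).real             # p(o_2)
p_or = np.trace((P1 + P2) @ rho).real    # p(o_1 v o_2): exhaustive outcomes
p_and = np.trace(P1 @ P2 @ rho).real     # p(o_1 ^ o_2): orthogonal projectors

print(p1, p2, p_or, p_and)
```

Here `p_and` is exactly zero because `P1 @ P2` is the zero matrix, mirroring the claim that all outcomes occurring at once has probability 0.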
 
Last edited:
  • #235
Morbert said:
Where am I going wrong?
You represent outcomes with projectors. For example, in the case of particle position, the projector would be something like ##|x\rangle\langle x|##. That's enough for computing the probability of position ##x##. However, the formalism you outlined talks about probabilities of position, not about position itself. A formalism that talks about position itself should have a real-valued variable ##x##, and the formalism you outlined does not have such a variable. (The operator ##\hat{x}## would not count because it's a hermitian operator, not a real-valued variable.)

Compare this formalism with classical stochastic mechanics. There one has a function ##x(t)##, which is a stochastic (not deterministic) function of ##t##. Such a quantity is missing in the quantum formalism.
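As an illustration of what such a quantity looks like classically (the discretized Wiener process is my own example choice), a stochastic theory hands you an actual sample path ##x(t)##, not merely a distribution over ##x##.

```python
import random

# Classical stochastic mechanics supplies an actual sample path x(t), not just
# a probability distribution for x: here, a simple discretized Wiener process.
def sample_path(n_steps=1000, dt=0.01, seed=0):
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += rng.gauss(0.0, dt ** 0.5)  # independent Gaussian increment, var = dt
        path.append(x)
    return path

path = sample_path()
print(len(path))  # one definite realized trajectory with n_steps + 1 points
```

Each call with a fresh seed yields a different, but always definite, trajectory; that single realized function of time is exactly what the quantum formalism does not supply.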
 
  • #236
martinbn said:
What if it is not possible to have such variables.
Sure, it is a logical possibility, but I do not see any good reason to believe it. I think many physicists believe it only because they wish the standard quantum formalism to be the complete theory that does not need any additions. They want to believe that they understand everything that can be understood, so if there is something that they don't understand it's because it cannot be understood at all. Such an attitude is too dogmatic and anti-scientific for my taste. But as I said, it is a legitimate and logically consistent point of view.
 
Last edited:
  • Like
Likes physika, OCR, Simple question and 1 other person
  • #237
Demystifier said:
An electron is in a magnetic-moment eigenstate, so the value of the magnetic moment is not uncertain. That's what I mean when I say that it's deterministic, rather than probabilistic.

Planck predicted the Planck distribution 25 years before Born postulated the Born rule. Planck derived it from classical probabilistic reasoning, combined with the hypothesis that energy (associated with given frequency) can take only discrete values. Sure, one can also derive the Planck distribution from the modern Born rule for general mixed states, but my point is that the Planck distribution can be derived even without it.
The magnetic moment of the electron is proportional to its spin and as such is of course an observable like any other, and thus behaves probabilistically. In fact, the magnetic moment is what's measured in the Stern-Gerlach experiment.

I know how Planck derived the black-body spectrum. That, however, is not how it's done in modern QED. In modern QED you evaluate the canonical equilibrium distribution of free photons, and to interpret the physics of the result you of course use Born's rule.
 
  • Like
Likes dextercioby
  • #238
vanhees71 said:
The magnetic moment of the electron is proportional to its spin and as such is of course an observable like any other, and thus behaves probabilistically. In fact, the magnetic moment is what's measured in the Stern-Gerlach experiment.
The absolute value of the spin of the electron is always ##1/2##; there is no uncertainty about this, so the prediction that it is ##1/2## is deterministic.
 
  • #239
Fra said:
I agree. But the association I made was that such placeholder variables likely do exist; it's their influence on causality that does not work "the way some wish". And these placeholders might not even have the property of being "observables" in the technical sense.

Bell's theorem only forbids a specific type of hidden variables: those that are objective.

I do not share Bohmian ideas, but on this point I see sound logical possibilities. It's the idea that some variables are subjective, and that only some of these form equivalence classes to restore objectivity.

/Fredrik
What's a subjective variable?
 
  • #240
Hornbein said:
What's a subjective variable?
What I meant was: it's "measured" and encoded by individual observers and is thus "real" for that observer. But this variable can't be copied or cloned like a classical variable. And the observer's subjective "measurement" does not qualify as a measurement of an observable as per QM or QFT. So it's a form of non-realism.

Demystifier's alternative interpretation of Bohmian HVs is also similar, I think. https://arxiv.org/abs/1112.2034

Apart from that coincidence, I am not talking about the Bell-type HVs, of course.

/Fredrik
 
  • #241
Demystifier said:
You represent outcomes with projectors. For example, in the case of particle position, the projector would be something like ##|x\rangle\langle x|##. That's enough for computing the probability of position ##x##. However, the formalism you outlined talks about probabilities of position, not about position itself. A formalism that talks about position itself should have a real-valued variable ##x##, and the formalism you outlined does not have such a variable. (The operator ##\hat{x}## would not count because it's a hermitian operator, not a real-valued variable.)

Compare this formalism with classical stochastic mechanics. There one has a function ##x(t)##, which is a stochastic (not deterministic) function of ##t##. Such a quantity is missing in the quantum formalism.
Where we disagree is that I don't think such a variable is needed "to explain why a single outcome (rather than all outcomes at once) is realized at all". We can show that, when presented with a sample space of experimental outcomes, QM rules out the possibility that all outcomes (or even more than one) occur at once, since the probability is 0, as shown in my last post. Similarly, we can show that "no outcome occurs" also has probability ##p(\varnothing) = \mathrm{tr}_{sD}([I_{sD}-\sum_i\Pi_i(t)] \rho_s\otimes\rho_D) = 0##.

I.e., we can interpret QM as returning probabilities for possible outcomes, without a variable corresponding to the "true outcome", and this interpretation won't suffer from problems like implying that all outcomes might occur at once.
 
Last edited:
  • #242
Morbert said:
Where we disagree is that I don't think such a variable is needed "to explain why a single outcome (rather than all outcomes at once) is realized at all".
A 5-year-old child asks: Mommy and Daddy, why is there no Sun during the night?
Daddy: Because during the night the probability of seeing the Sun is zero.
Mommy: Because the Earth is round and during the night the Sun is on the other side.

Both explanations are true, but which is better?
 
  • Like
  • Love
Likes lodbrok, DrChinese and Simple question
  • #243
Demystifier said:
A 5-year-old child asks: Mommy and Daddy, why is there no Sun during the night?
Daddy: Because during the night the probability of seeing the Sun is zero.
Mommy: Because the Earth is round and during the night the Sun is on the other side.

Both explanations are true, but which is better?
This is misleading. How about this: a 5-year-old wants to have a sibling. The parents say sure. The child asks: where is it now? Parent one: it doesn't exist yet. Parent two: there are exact values of the position variables, we just don't know them.
 
  • Haha
Likes WernerQH
  • #244
martinbn said:
Parent two: there are exact values of the position variables, we just don't know them.
The point, I think, is that it's not just we who don't know: no one and no thing has been able to learn them, which is why we do not obey the Bell inequality. It might well be "lost in chaos" and thus decouple from any inference.

I think the difference between a subatomic variable and the Sun is that it would take nothing less than a black hole to scramble, for the whole environment, the information of where the Sun went. Putting a blindfold on Daddy at night also gives zero probability of seeing the Sun, but that isn't the mechanism.

/Fredrik
 
  • #245
Demystifier said:
A 5-year-old child asks: Mommy and Daddy, why is there no Sun during the night?
Daddy: Because during the night the probability of seeing the Sun is zero.
Mommy: Because the Earth is round and during the night the Sun is on the other side.

Both explanations are true, but which is better?
I think this is where the subjectivity comes into play. Everyone would agree that for macroscopic matters like the solar system, explanations like the second are better. When it comes to microscopic matters, many people are happy to frame QM as characterising microscopic systems in terms of macroscopic tests and responses, without grounding it in some primitive ontology.

At the same time, this doesn't mean the notion of explanation is entirely surrendered. E.g. The Dad might instead explain why there is no sun at night by talking about the way in which the solar system is "prepared" and the dynamics it obeys.
 
  • Like
Likes LittleSchwinger
  • #246
Morbert said:
At the same time, this doesn't mean the notion of explanation is entirely surrendered. E.g. The Dad might instead explain why there is no sun at night by talking about the way in which the solar system is "prepared" and the dynamics it obeys.
Except Dad can't "control" or prepare a source of suns. The Sun (or rather the Earth) goes where it wants. It's exactly here that we need a bridge between the two views.

/Fredrik
 
  • #247
Morbert said:
When it comes to microscopic matters, many people are happy to frame QM as characterising microscopic systems in terms of macroscopic tests and responses, without grounding it in some primitive ontology.
Well summarized!
 
  • #248
martinbn said:
This is misleading. How about this: a 5-year-old wants to have a sibling. The parents say sure. The child asks: where is it now? Parent one: it doesn't exist yet. Parent two: there are exact values of the position variables, we just don't know them.
Parent 3 (it's a modern family involving more than two parents): It doesn't exist yet, it will be made out of food that mommy eats, very much like you make a tower out of sand.
 
  • #249
Morbert said:
Given a sample space of possible outcomes ##\{o_i\}## of an experiment involving measured system ##s## and detector array ##D##, the probability of an outcome occurring in a given run is $$p(o_i) = \mathrm{tr}_{sD}(\Pi_i(t)\rho_s\otimes\rho_D)$$The probability of an event like ##o_i\lor o_j## occurring is $$p(o_i\lor o_j)=\mathrm{tr}_{sD}([\Pi_i(t)+\Pi_j(t)]\rho_s\otimes\rho_D)$$The probability of all outcomes occurring at once in a given run is $$p(o_1\land o_2\land\dots\land o_N) = \mathrm{tr}_{sD}(\Pi_1(t)\Pi_2(t)...\Pi_N(t)\rho_s\otimes\rho_D) = 0$$Where am I going wrong?
This is fine. But
  • it does not conform to the definitions in Wikipedia that you cited (which is quite sloppy, so this is the minor problem).
  • Your events are still classical, i.e., outside the framework of quantum mechanics. They are added to the quantum formalism in an ad hoc manner, without any rules for identifying their meaning.
What does it mean in quantum terms for a detector to produce an event? To give your POVM a meaning you need to refer to the classical description of the experiment in order to identify the projectors with real events. This is what I mean by classical.
 
Last edited:
  • Like
Likes mattt
  • #250
Demystifier said:
Parent 3 (it's a modern family involving more than two parents): It doesn't exist yet, it will be made out of food that mommy eats, very much like you make a tower out of sand.
But your need for hidden variables is like parent 2.
 
