Nobody understands quantum physics?

  • #101
A. Neumaier said:
This does not answer the query. Prepared is a superposition without definite values of ##S_x##. But measured is one of the values ##\pm1##, let us say ##+1##. The question is when, in a quantum description of the detector, the definite value ##+1## is obtained.

To quote the Schwinger essay mentioned by @LittleSchwinger:

"Therefore, the mathematical scheme for microscopic measurements can certainly not be the representation of physical properties by numbers. [...] we must instead look for a new mathematical scheme in which the order of performing physical operations is represented by an order of performance of mathematical operations. The mathematical scheme that was finally found to be necessary and successful is the representation, in a very abstract way, of physical properties not by numbers but by elements of an algebra for which the sense of multiplication matters."

"If you know the state, you can then predict what the result of repeated trials of measurement of a particular physical property will be. You will have perfectly determinate, statistical predictions but no longer individual predictions."

"The knowledge of the state does not imply a detailed knowledge of every physical property but merely, in general, of what the average or statistical behavior of physical properties may be."

I read this to mean:

i) We should be careful not to attribute a property like ##S_x = +1## to the object of measurement. ##+1## would only be attributed to the classical datum post-measurement. If we want to speak about properties of the object of measurement, we would use a representation like ##\Pi_{S_x=+1}##. This more robust representation frees us from worrying about when a particular property does or doesn't obtain. Only the ordering of the measurement operations under consideration is important.

ii) We do not have to associate the preparation of the object of measurement with the moment a microscopic property obtains. The quantum state is not an assertion of what properties the system has at any given time. It is only an assertion of what future statistics can be expected.

[edit] - Tidied up a bit
 
Last edited:
  • Like
Likes LittleSchwinger
  • #102
Morbert said:
To quote the Schwinger essay mentioned by @LittleSchwinger:

"Therefore, the mathematical scheme for microscopic measurements can certainly not be the representation of physical properties by numbers. [...] we must instead look for a new mathematical scheme in which the order of performing physical operations is represented by an order of performance of mathematical operations. The mathematical scheme that was finally found to be necessary and successful is the representation, in a very abstract way, of physical properties not by numbers but by elements of an algebra for which the sense of multiplication matters."

"If you know the state, you can then predict what the result of repeated trials of measurement of a particular physical property will be. You will have perfectly determinate, statistical predictions but no longer individual predictions."

"The knowledge of the state does not imply a detailed knowledge of every physical property but merely, in general, of what the average or statistical behavior of physical properties may be."
That's the most precise no-nonsense statement I can think of. Indeed, this introductory chapter of Schwinger's textbook is a must-read for anybody interested in the interpretational issues of QT.
Morbert said:
I read this to mean:

i) We should be careful not to attribute a property like ##S_x = +1## to the object of measurement. ##+1## would only be attributed to the classical datum post-measurement. If we want to speak about properties of the object of measurement, we would use the representation ##\Pi_{S_x=+1}##. This more robust representation frees us from worrying about when a particular property does or doesn't obtain. Only the ordering of the measurement operations under consideration is important.
Yes, the state implies the probabilities for the outcome of any measurement you can do on the prepared system. If the state is such that ##P(s_x=+\hbar/2)=1##, then this observable takes the determined value ##\hbar/2##; otherwise its value is indeterminate, and only the probabilities for the possible outcomes are given by the preparation in that state.
Morbert said:
ii) We do not have to associate the preparation of the object of measurement (e.g. by blocking the other beam) with the moment a microscopic property obtains. The quantum state is not an assertion of what properties the system has at any given time. It is only an assertion of what future statistics can be expected.
The state represents a preparation procedure. It predicts probabilistic, and only probabilistic, properties of the outcomes of future measurements.
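The point about ##P(s_x=+\hbar/2)=1## versus indeterminate values can be checked with a minimal Born-rule calculation. This is only an illustrative sketch in plain Python for spin-1/2; the helper names `inner` and `prob` are my own, not from any library:

```python
import math

def inner(a, b):
    # <a|b> for plain-tuple state vectors
    return sum(x.conjugate() * y for x, y in zip(a, b))

def prob(eigvec, state):
    # Born rule: p = |<eigvec|state>|^2
    return abs(inner(eigvec, state)) ** 2

s = 1 / math.sqrt(2)
x_plus  = (s, s)     # S_x = +hbar/2 eigenstate, written in the S_z basis
x_minus = (s, -s)    # S_x = -hbar/2 eigenstate

# State prepared as the S_x = +hbar/2 eigenstate: the outcome is determined.
print(prob(x_plus, x_plus))            # 1.0 (up to rounding)

# State prepared as S_z "up": S_x is indeterminate, each outcome has p = 1/2.
z_up = (1.0, 0.0)
print(prob(x_plus, z_up), prob(x_minus, z_up))
```

The two printed cases correspond exactly to the "determined value" versus "only probabilities" distinction above.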
 
  • Like
Likes LittleSchwinger
  • #103
vanhees71 said:
It's just by filtering. Take the Stern-Gerlach experiment, which can be designed as an almost perfect filter measurement: you use an inhomogeneous magnetic field with the right properties to split the beam into two spatially well-separated parts, of which you know (from unitary quantum dynamics, by the way) that position and spin component (which one is selected by the direction of the large homogeneous part of the magnetic field) are almost perfectly entangled. Blocking one partial beam thus prepares a beam with a determined spin component. It's clear that you can prepare only one spin component and not more, i.e., the spin components in other directions don't take determined values, and the so-prepared quantum state implies the probabilities, and only the probabilities, for the results of measurements of any spin component. Measuring the prepared spin component gives the prepared value with 100% probability, i.e., this and only this spin component takes a determined value. The preparation is complete once the beams are sufficiently well separated by the (unitary) dynamics of the particles moving in the magnetic field.
Remember that the signal produced in the original Stern-Gerlach experiment consisted of two well-separated clusters of points on a screen, not a single pair. The experiment strongly hints at the existence of a spin degree of freedom that is entangled, but it also hints at the existence of a number of other configurational degrees of freedom associated with the wave function that ultimately must somehow collapse to a point with each trial in order for any experimental data to be obtained at all. It's possible that similar (uncontrolled) filtering mechanisms are at work in separating each measurement outcome, but verifying that requires performing a second experiment (trying to 'dissect' the wave function collapse process, to determine each filtering mechanism in turn).

At a certain point, you need to explain why the Hamiltonian of a system is definite and not itself described by a wave function or density operator. It could be that the Hamiltonian is a macroscopic statistical average that emerges from a random ensemble of definite (observed) events, or that it is somehow connected to the ability of the experimenter to both modulate a system and 'read' its character, or something else entirely, but quantum mechanics in its traditional formulation makes it very difficult to tell which of these perspectives if any is in fact closest to the truth.
 
  • Like
Likes physika
  • #104
Couchyam said:
At a certain point, you need to explain why the Hamiltonian of a system is definite
I'm not sure what you mean by "definite". The Hamiltonian is an operator.
 
  • Like
Likes vanhees71
  • #105
PeterDonis said:
I'm not sure what you mean by "definite". The Hamiltonian is an operator.
What I mean is that the Hamiltonian is (conventionally understood as) a 'definite operator', as opposed to an operator-valued random number generator, or a wave function over a space of operators (or part of some larger wave function of the universe.) It might change over time, and it might be impossible to measure its components exactly, but at every instant it has (in principle) a well-defined value. If a formulation of quantum mechanics existed in which the Hamiltonian wasn't definite, the authors of the theory would need to explain the exact nature of its indefiniteness very carefully (possibly by appealing to a more fundamental mechanism for time evolution.) There may be some way for a Hamiltonian to 'emerge' from a wave function that either lacked inherent dynamics or was described by some completely strange-looking but unitary 'super-Hamiltonian', but that would mark a departure from the conventional picture.
 
Last edited:
  • Like
Likes Fra
  • #106
vanhees71 said:
After the magnet the spin component in direction of the field is (almost completely) entangled with position, i.e., in each of the two partial beams "selected" by the magnet you have a well-prepared spin component.
I do not understand the term "after the magnet" in the above. I believe this goes to the fundamental disagreement in much of this discussion. Is this temporal? Spatial?
 
  • #107
hutchphd said:
I do not understand the term "after the magnet" in the above. I believe this goes to the fundamental disagreement in much of this discussion. Is this temporal? Spatial?
I guess you could say that conditional on the particle having been measured (at some point away from the magnet, after it has been emitted by whatever source is used), it must have been influenced by the magnetic field to some extent (barring somewhat contrived neutral Aharonov-Bohm field configurations.) Speaking of the Aharonov-Bohm effect and spatial/temporal ambiguities...
 
  • #108
hutchphd said:
I do not understand the term "after the magnet" in the above. I believe this goes to the fundamental disagreement in much of this discussion. Is this temporal? Spatial?
Is there a disagreement (in this thread)? Between whom? I guess "after the magnet" is Spatial. Probably near the hole in some imaginary aperture, which blocks the unwanted part of the particle beam.
 
  • #109
Couchyam said:
At a certain point, you need to explain why the Hamiltonian of a system is definite and not itself described by a wave function or density operator. It could be that the Hamiltonian is a macroscopic statistical average that emerges from a random ensemble of definite (observed) events, or that it is somehow connected to the ability of the experimenter to both modulate a system and 'read' its character, or something else entirely, but quantum mechanics in its traditional formulation makes it very difficult to tell which of these perspectives if any is in fact closest to the truth.
My answer would be that preparation does not just determine the initial state, but also the Hamiltonian.

And in general, I guess that the failure to distinguish between preparation and measurement is responsible for some of the confusion with QM and its interpretation. Using measurement to emulate preparation seems so convenient and straightforward, just like an additional quantum symmetry. But you risk a totally unnecessary circularity in this way.
 
  • Like
Likes dextercioby
  • #110
Couchyam said:
At a certain point, you need to explain why the Hamiltonian of a system is definite
gentzen said:
My answer would be that preparation does not just determine the initial state, but also the Hamiltonian.

And in general, I guess that the failure to distinguish between preparation and measurement is responsible for some of the confusion with QM and its interpretation. Using measurement to emulate preparation seems so convenient and straightforward, just like an additional quantum symmetry. But you risk a totally unnecessary circularity in this way.
This illustrates the problem of the paradigm where one has a timeless evolution law (represented by hamiltonian flow), that works on timeless statespaces.

The problem from the perspective of inference is that ANY "input" counts. "Specifying the Hamiltonian" is no less "input" than "specifying the initial state". The difference is that the Hamiltonian is given much more weight (by construction). But it is pretty obvious that no tomographic process can infer a timeless Hamiltonian with perfect confidence; how could it? I think inference limited by a certain capacity effectively implies an energy cutoff (or large-time limit), which I interpret as physically related to the agent's complexity. Perhaps one can also interpret this as a decoupling between agents that fail to decode each other, due to running out of processing time on the dynamical scale.

(As I see the solution to this, it does seem circular at first, but I would say it can be evolutionary. I.e., it's no more circular than the relation between how the dynamics IN spacetime changes the dynamics OF spacetime, but applied to a more abstract "information space" that involves all information, not just positions. I see it as a possible feedback loop of learning. This is very different from "circular reasoning".)

When you think this gets too complicated, you can truncate or "freeze" the process from the perspective of a given observer, and the effective theory, frozen, looks like the normal Hamiltonian paradigm. That happens when you ignore some marginal uncertainty and treat anything "sufficiently certain" as completely certain.

/Fredrik
 
  • #111
gentzen said:
But you risk a totally unnecessary circularity in this way.
I agree, but there is a possible benefit as well: to connect two different effective theories, or to allow for an emergent Hamiltonian as part of the physics. And I think this is sort of what one needs in the quest for unification. But it certainly makes things more complicated with a feedback. A feedback like this is already why GR is quite complicated; we have the problem of time, etc.

/Fredrik
 
  • #112
Fra said:
I agree, but there is a possible benefit as well. To connect two different effective theories or allow for emergent hamiltonian as part of the physics. And I think this is sort of what one needs in the quest for unification.
Well, the quest for unification is not my quest. I guess my quest is just to be able to communicate (about physics), without too much appeal to authority.

I sort of get why preparation and measurement are closely related. For example, if I prepare atoms by shielding them and waiting long enough until nearly all of them have relaxed to their ground state, then I know that they are in their ground state. And because I know it, I can claim that I somehow measured it, because what else is measurement than coming to know some specific properties. But ... I would prefer to measure properties which were there before I measured, and to prepare states which will be there after I prepared.
 
  • #113
gentzen said:
My answer would be that preparation does not just determine the initial state, but also the Hamiltonian.

And in general, I guess that the failure to distinguish between preparation and measurement is responsible for some of the confusion with QM and its interpretation. Using measurement to emulate preparation seems so convenient and straightforward, just like an additional quantum symmetry. But you risk a totally unnecessary circularity in this way.
@gentzen: I'm not entirely sure if I understand the points you are making (about the Hamiltonian being determined by 'preparation', or the dichotomy between 'preparation' and 'measurement', or the nature of the 'circularity' that is risked by conflating the two.) Is the idea that it is hard to derive scientific meaning (or evaluate ethics) in experiments where there isn't a clear dichotomy between preparation and measurement, or are you saying something about the fundamental nature of Hamiltonians (to the extent that Hamiltonians are fundamental?) Consider for example an experiment in which, say half-way through, a completely random earthquake starts, jostling the apparatus: would the data be fundamentally useless because they weren't produced with the (intended) 'prepared' Hamiltonian, or could some kind of meaning be rescued at the end of the day if the bumps were measured precisely enough? (Would you interpret the 'bump measuring' apparatus as another necessary part of the preparation?)

@Fra: could you explain to me what you mean by a 'timeless Hamiltonian'? I would also much appreciate it if you could discuss what you meant by 'ANY "input" counts', or what it would mean to "freeze [a] process from a given observer".
 
  • #114
Couchyam said:
dichotomy between 'preparation' and 'measurement', or the nature of the 'circularity' that is risked by conflating the two
I don't think there is a dichotomy, or that the two are conflated. I think that measurement is presented as the special operation, and preparation is either ignored, or emulated by measurement.

I don't intend to talk about something complicated here. It's plain and simple: if you prepare your experiment, call it preparation. Who cares whether you can do some interpretational dance and interpret it as a sort of measurement?

The circularity is also something very simple. Measurement has to store/register the new information somewhere. If you allow yourself the possibility of preparing a sufficient number of qubits in some well-defined state (typically some ground state), then storing the new information there (even redundantly) is easy. But if you want to use measurement to prepare those qubits in a well-defined state, then ... you risk entangling yourself in circularity.

Couchyam said:
Is the idea that it is hard to derive scientific meaning (or evaluate ethics) in experiments where there isn't a clear dichotomy between preparation and measurement, or are you saying something about the fundamental nature of Hamiltonians (to the extent that Hamiltonians are fundamental?)
No, not at all. The idea is that preparation is (often) really simple, even in cases where it is a combination between knowing and ensuring certain things. Measurement is (often) more tricky.
 
  • #115
I don't want to go too deeply into this, as that is impossible without getting offside...
Couchyam said:
@Fra: could you explain to me what you mean by a 'timeless Hamiltonian'? I would also much appreciate it if you could discuss what you meant by 'ANY "input" counts', or what it would mean to "freeze [a] process from a given observer".
The terms I used were not very formal, but:

1) By "timeless Hamiltonian" I essentially mean that the "laws of evolution" (often encoded as a Hamiltonian) are fixed, non-dynamical, and considered to be what they are because that's how nature is. This gives the paradigm that the initial conditions imply the future.

The opposite view (which I prefer) implies that one should treat initial conditions and laws on a more equal footing. See for example

Unification of the state with the dynamical law

"We address the question of why particular laws were selected for the universe, by proposing a mechanism for laws to evolve. Normally in physical theories, timeless laws act on time-evolving states. We propose that this is an approximation, good on time scales shorter than cosmological scales, beyond which laws and states are merged into a single entity that evolves in time. Furthermore the approximate distinction between laws and states, when it does emerge, is dependent on the initial conditions. These ideas are illustrated in a simple matrix model. "
-- https://arxiv.org/abs/1201.2632

I would not bother with the explicit model in that paper (which I think is too simple); the important thing is the idea. He has also dedicated books to this argument. As I see it, the purpose of the books is not to present the explicit model that solves this (that is an open issue); the main objective is to change the way many physicists think about this. https://www.amazon.com/dp/1107074061/?tag=pfamazon01-20

2) But "any input" I mean any information that an agent makes inferences upon, makes use of both implicit and explicit informaiton, and it comes if different forms. The most obvious, explicit and most adjustable information can be encoded in a STATE. There are alot of background information, that we can fool ourselves with beeing just "mathematics", but I think it clearly biases our inferences (and any agents inferences), and this is not acceptable for me. The background information usually is chosen as well, but it's slower process. The LAW what deduces the future from the past (in the typical paradigm of a closeod system) is a very qualified piece of information. Where does this come from? A purist view of inference would expect this to follow from inference as well.

3) By "freeze" I meant effectively a perturbative approach, where you take any existing state of the observer as fixed and perturb from there. It is quite obvious that the differential state is going to be simpler, with more linear mathematics, just as you can Taylor-expand any function. So the "present" becomes the "background". But on larger time scales the background must evolve somehow. Until we understand this better, we can simply say we have a different "effective theory" at any point in this abstract space. But it's the relation between these theories, and how they flow into each other as part of physical interactions (NOT just on the theorist's notepad), that I find the challenge to understand.

/Fredrik
 
  • Like
Likes gentzen
  • #116
hutchphd said:
I do not understand the term "after the magnet" in the above. I believe this goes to the fundamental disagreement in much of this discussion. Is this temporal? Spatial?
Both. You have an Ag atom moving through the magnet in a finite time, and behind the magnet you have an entanglement between position and spin component, i.e., when selecting only Ag atoms in one of the corresponding spatial regions you find them to have a definite spin component ("up" or "down", depending on which region you choose).
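The selection step described here can be sketched as a projection followed by renormalization (the Lüders rule for a pure state). This is a minimal plain-Python illustration only; the function name `filter_beam` is my own, and the spatial dynamics inside the magnet is not modelled:

```python
import math

def filter_beam(kept, state):
    """Blocking one partial beam ~ projecting the spin state onto the
    kept component and renormalizing (Lueders rule for a pure state)."""
    amp = sum(k.conjugate() * c for k, c in zip(kept, state))
    p = abs(amp) ** 2                  # fraction of the ensemble that survives
    if p == 0:
        return None, 0.0               # beam entirely blocked
    norm = math.sqrt(p)
    return tuple(amp * k / norm for k in kept), p

s = 1 / math.sqrt(2)
z_up = (1.0, 0.0)                      # kept beam: S_z = +hbar/2
psi = (s, s)                           # incoming S_x = +hbar/2 eigenstate
out, p = filter_beam(z_up, psi)
print(p)        # 0.5: half the atoms survive the filter
print(out)      # surviving beam is the S_z "up" state (up to phase)
```

Whatever the incoming spin state, the surviving beam is the kept eigenstate, which is the sense in which blocking one beam "prepares" a determined spin component.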
 
  • #117
Couchyam said:
What I mean is that the Hamiltonian is (conventionally understood as) a 'definite operator', as opposed to an operator-valued random number generator, or a wave function over a space of operators (or part of some larger wave function of the universe.) It might change over time, and it might be impossible to measure its components exactly, but at every instant it has (in principle) a well-defined value. If a formulation of quantum mechanics existed in which the Hamiltonian wasn't definite, the authors of the theory would need to explain the exact nature of its indefiniteness very carefully (possibly by appealing to a more fundamental mechanism for time evolution.) There may be some way for a Hamiltonian to 'emerge' from a wave function that either lacked inherent dynamics or was described by some completely strange-looking but unitary 'super-Hamiltonian', but that would mark a departure from the conventional picture.
What's deterministic in quantum dynamics is the evolution of the probabilities, and that's indeed described by the Hamiltonian. Concerning the observables it's indeed in a sense a random-number generator (as far as we know a perfect one, i.e., it's not somehow deterministic in a hidden way). The Hamiltonian in standard QT doesn't "emerge from a wave function" but is given for the system under investigation to determine the time evolution of the wave function from the initial wave function (given by the "preparation of the system"). In this sentence you can everywhere write "statistical operator" instead of "wave function" to cover the most general case of states.
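The deterministic evolution of the probabilities can be illustrated for a two-level system with a diagonal Hamiltonian (units with ##\hbar=1##; the helper name `evolve` is my own, and this is a sketch, not a general solver):

```python
import cmath, math

def evolve(state, energies, t):
    # Solution of the Schroedinger equation for a diagonal Hamiltonian
    # (hbar = 1): each energy amplitude picks up a phase exp(-i E t).
    return tuple(c * cmath.exp(-1j * E * t) for c, E in zip(state, energies))

s = 1 / math.sqrt(2)
psi0 = (s, s)                      # equal superposition of the energy basis
psi_t = evolve(psi0, (0.0, 1.0), math.pi)

# Born probability of the "+" superposition outcome at t = pi: a
# deterministic function of t (here cos^2(t/2) = 0), even though each
# single measurement outcome is random.
amp = s * psi_t[0] + s * psi_t[1]
print(abs(amp) ** 2)
```

The probabilities (not the individual outcomes) follow a fixed, deterministic law set by the Hamiltonian, which is exactly the distinction drawn in the post above.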
 
  • Like
Likes DrChinese and LittleSchwinger
  • #118
Couchyam said:
What I mean is that the Hamiltonian is (conventionally understood as) a 'definite operator', as opposed to an operator-valued random number generator
Such a concept wouldn't make much sense.

In QM you have your algebra of observables and then per Gleason's theorem (or Busch's if you take POVMs) quantum states, i.e. statistical operators, can be derived as probability assignments to the observables.

Thus they do take values probabilistically, but having the operator itself be random wouldn't make much physical sense. In a lab we know if we are measuring ##S_{x}## or ##S_{z}##, based on the orientation of the Stern-Gerlach magnets for example, that doesn't fluctuate.
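The probability assignments mentioned here take the familiar form ##p(E)=\mathrm{Tr}(\rho E)##. A minimal plain-Python check for spin-1/2 (the matrix helpers are my own, illustrative only):

```python
def matmul(a, b):
    # 2x2 matrix product with plain nested lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(m):
    return (m[0][0] + m[1][1]).real

# Maximally mixed spin-1/2 statistical operator
rho = [[0.5, 0.0], [0.0, 0.5]]
# Projectors onto the S_x = +1/2 and S_x = -1/2 eigenstates (S_z basis)
E_x_plus  = [[0.5, 0.5], [0.5, 0.5]]
E_x_minus = [[0.5, -0.5], [-0.5, 0.5]]

p_plus  = trace(matmul(rho, E_x_plus))
p_minus = trace(matmul(rho, E_x_minus))
print(p_plus, p_minus)   # the two outcome probabilities sum to 1
```

Gleason's (or Busch's) theorem says that, given the observable algebra, every consistent probability assignment must have this trace form for some statistical operator ##\rho##.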
 
  • Like
Likes dextercioby and PeterDonis
  • #119
hutchphd said:
I do not understand the term "after the magnet" in the above. I believe this goes to the fundamental disagreement in much of this discussion. Is this temporal? Spatial?
vanhees71 said:
Both. You have an Ag atom moving through the magnet in a finite time,
But in this specific experiment, you have no control over when an Ag atom reaches the magnet (and can't measure it either), so I think that for this specific experimental arrangement, "Spatial" is the better answer. There could be similar experiments where control and knowledge about time is relevant. But one lesson from QT is that you have to focus on your specific actually performed experiment, and not on some other experiment which you could have performed instead.

vanhees71 said:
and behind the magnet you have an entanglement between position and spin component, i.e., when selecting only Ag atoms in one of the corresponding spatial regions you find them to have a definite spin component ("up" or "down", depending on which region you choose).
You see, in this specific experiment, you choose a spatial region. That is my argument for why "Spatial" should be the answer.
 
  • Like
Likes vanhees71
  • #120
LittleSchwinger said:
Thus they do take values probabilistically, but having the operator itself be random wouldn't make much physical sense.
Bell-tests are often arranged in a way that depending on some random element, something macroscopically different is done. This would be reflected in a time dependent Hamiltonian, if the used quantum description were "sufficiently complete". But since the outcome of this random element will be known in the analysis of the measurement results, it is unclear whether saying that the Hamiltonian was random makes physical sense.
 
  • Like
Likes vanhees71 and LittleSchwinger
  • #121
gentzen said:
Bell-tests are often arranged in a way that depending on some random element, something macroscopically different is done. This would be reflected in a time dependent Hamiltonian, if the used quantum description were "sufficiently complete". But since the outcome of this random element will be known in the analysis of the measurement results, it is unclear whether saying that the Hamiltonian was random makes physical sense.
Yeah that's true actually. You have similar measurement controlled gates in quantum computing. As you mentioned you never need to model them with randomised Hamiltonians. Such randomised Hamiltonians could always be absorbed into a CPTP map, just as randomised PVMs can just be represented as POVMs.
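That a classically randomised choice between two PVMs is just a POVM can be verified directly: the weighted projectors form a set of effects summing to the identity. A plain-Python sketch (the helper names and the mixing probability `p = 0.3` are my own choices):

```python
import math

def outer(v):
    # |v><v| as a plain-Python 2x2 matrix
    return [[v[i] * v[j].conjugate() for j in range(2)] for i in range(2)]

def scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
pvm_z = [outer((1.0, 0.0)), outer((0.0, 1.0))]   # S_z projectors
pvm_x = [outer((s, s)), outer((s, -s))]          # S_x projectors

p = 0.3  # probability of choosing the S_x measurement
effects = [scale(p, e) for e in pvm_x] + [scale(1 - p, e) for e in pvm_z]

total = effects[0]
for e in effects[1:]:
    total = madd(total, e)
print(total)   # ~ the 2x2 identity: the four effects form a valid POVM
```

The four weighted operators are positive and complete, so the random PVM choice is operationally indistinguishable from this single four-outcome POVM.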
 
  • Like
Likes vanhees71 and gentzen
  • #122
LittleSchwinger said:
Such randomised Hamiltonians could always be absorbed into a CPTP map, just as randomised PVMs can just be represented as POVMs.
This gets fuzzy now, but here I see a big problem. This "solution" requires more complexity for representation and computation. If you are instead working from the perspective of a given generalized agent/observer, I think there must be an effective cutoff due to information complexity which simply forbids going higher. This is not a solution that satisfies me, because it's like inventing an algorithm to solve a problem that no actual computer can execute on relevant time scales. Such a solution is no real solution.

/Fredrik
 
  • #123
It's pretty routine in Quantum Information to write down POVMs and CPTP maps, I'm not sure what you mean.
 
  • Like
Likes vanhees71
  • #124
vanhees71 said:
Where does the fundamental formulation of QT depend on classical physics?
WernerQH said:
Can you refer to a formulation of quantum theory that does not use the term "measurement"? And doesn't measurement require classical apparatus?

@vanhees71, what's your answer?
 
  • #125
LittleSchwinger said:
It's pretty routine in Quantum Information to write down POVMs and CPTP maps, I'm not sure what you mean.
Yes I know; I was reflecting on that from a foundational perspective, with the unification of forces in mind. If we stick to effective theories, there is no problem at all with your suggestion.

/Fredrik
 
  • #126
WernerQH said:
@vanhees71, what's your answer?
My answer is that I don't see where the problem is. The classical behavior of macroscopic matter, including matter used for measurements, can be understood from quantum many-body theory. On the other hand, no experiment so far gives any hint that macroscopic bodies fail to show "quantum behavior" if I can prepare them in a state where I can observe it. Indeed, even the LIGO mirrors show quantum behavior, because one can measure their motion accurately enough to resolve it, btw. using again quantum effects to achieve this (squeezed states of light).
 
  • Like
Likes LittleSchwinger
  • #127
vanhees71 said:
My answer is that I don't see, where the problem is. The classical behavior of macroscopic matter, including matter used for measurements, can be understood from quantum many-body theory.
Shouldn't the teaching of quantum theory then start with QFT, as the basis? :wink:
Why does the formulation of quantum theory require such anthropocentric notions like "state preparation" and "measurement"?
https://www.physicsforums.com/insights/the-7-basic-rules-of-quantum-mechanics/

Do you consider it a waste of time (of "merely" philosophical interest) to search for a better formulation of the theory?
 
  • Like
Likes dextercioby
  • #128
WernerQH said:
Why does the formulation of quantum theory require such anthropocentric notions like "state preparation" and "measurement"?
Quantum Theory says that not all physical quantities take well-defined values at all times, interactions with atomic systems are inherently probabilistic, and the acquisition of information fundamentally disturbs the system. The conjunction of all these facts, as can be seen in results like the Kochen-Specker theorem, means you just can't talk meaningfully about "the well-defined quantities that hold values at all times independent of preparation and measurement".

It was only in the idealisation of classical mechanics, where information could be obtained without disturbing a system, determinism held, and all physical values were well-defined, that you could divorce the theory from preparation and measurement*.

*Of course even here practically you couldn't, since physics always involves preparing a system and gathering the statistics for measurements. It was simply consistent with the theory to imagine doing so in some idealised limit.
 
  • Like
Likes dextercioby and gentzen
  • #129
WernerQH said:
Shouldn't the teaching of quantum theory then start with QFT, as the basis? :wink:
Why does the formulation of quantum theory require such anthropocentric notions like "state preparation" and "measurement"?
https://www.physicsforums.com/insights/the-7-basic-rules-of-quantum-mechanics/

Do you consider it a waste of time (of "merely" philosophical interest) to search for a better formulation of the theory?
From a strictly deductive approach you should start from a theory of everything and then derive the phenomena via the appropriate approximations for the given situation. That's of course an impractical way to introduce physics students to the subject ;-)).

All of physics requires anthropocentric notions like "state preparation" and "measurement", because physics has established itself as a description of the phenomena in quantitative terms, i.e., you must define in some way how to measure things, including the definition of units etc. Only then can you specify the "state of the system" at some initial time ("preparation") in a concise quantitative way and measure properties of the system in a concise quantitative way at a later time ("measurement of observables"), and this is no different in classical physics. So quantum theory is not more anthropocentric than classical physics.
 
  • #130
WernerQH said:
Why does the formulation of quantum theory require such anthropocentric notions like "state preparation" and "measurement"?
For me, certain aspects of "measurement" are very hard (impossible?) to disentangle from anthropocentric notions. How can you formulate "no signaling" without reference to some human or agent-like entity? And "randomness" is not much better, especially since it is closely related to "no signaling" in the context of QM.

For me, even though "state preparation" involves at least some knowledge about the state of some system, I don't worry too much about that being anthropocentric. The system will be in some state near the ground state anyway, whether some human or agent knows it or not (or even ensured it to a certain extent).
 
  • Like
Likes dextercioby
  • #131
vanhees71 said:
and this is not different in classical physics too. So quantum theory is not more anthropocentric than classical physics.
I think the confusion comes in because in Classical Physics you can imagine a limit of preparations that fix all future measurements. It's just like how Classical Probability allows you to imagine probability as ignorance of a "totally fine-grained" state where all quantities are well-defined and there is no stochasticity.

I agree that practically Classical Physics is no different, but people tend not to think of Classical Mechanics in this practical way.
 
  • Like
Likes dextercioby, gentzen and vanhees71
  • #132
The reason is that for classical mechanics (and also classical electrodynamics) nobody had the idea to have philosophical debates about them within the physics community. This sin was committed by Bohr, Heisenberg, and some other followers, and thus you have all this confusion about the most successful physical theory that has been discovered to date. Another psychological phenomenon is that popular-science writers seem to think (maybe rightfully so) that their books sell better when claiming it's some mystery.

I'll never forget that, when I once went to a book shop (and it was also a university bookshop, not only a general one!) asking for quantum-theory books, the friendly clerk pointed right away to the "esoterics corner", and indeed, there was some pop-sci book about quantum mechanics (I think it was "The Tao of Physics").

https://en.wikipedia.org/wiki/The_Tao_of_Physics

Well, indeed, it was sorted in the right category, but it was definitely not what I was looking for ;-).
 
  • Sad
Likes gentzen
  • #133
vanhees71 said:
The reason is that for classical mechanics (and also classical electrodynamics) nobody had the idea to have philosophical debates about them within the physics community. This sin was committed by Bohr, Heisenberg, and some other followers, and thus you have all this confusion about the most successful physical theory that has been discovered to date
Not that it matters, but historically speaking I wouldn't necessarily agree. Back in the 19th Century there were plenty of philosophical debates about electromagnetism, even Newtonian Mechanics and Gravity. Similarly for General Relativity with things like the "hole argument". And it was physicists themselves that were having these discussions.

I think it's that quantum theory violates our intuition more strongly, so these debates haven't died off as rapidly.
 
  • Like
Likes vanhees71 and gentzen
  • #134
Well, yes, the hole argument was part of Einstein's struggle with the meaning of general covariance during the 10 years of research to find the final version of GR.

Indeed you are right, one of the greatest obstacles for the acceptance of GR was the inability of philosophers to understand it. That's most probably why, ironically, Einstein is one of the few Nobel laureates whose Nobel certificate explicitly states the very subject for which he did NOT get the prize ;-). It's pretty clear that Bergson's influence is the reason for that.
 
  • Like
Likes LittleSchwinger
  • #135
There's also a shift in the attitude to mathematics. Go back to the 30s and 40s and a non-mathematical discursive explanation of a theory was considered primary. The mathematics was simply how one implemented these ideas for quantitative use. I was surprised to find out even somebody as mathematical as Dirac thought this way.

The idea that mathematics was the primary way to explain physics only started to become common later. This is why older authors tend to have, from our perspective, rambling non-mathematical essays to explain things.
 
Last edited:
  • Like
Likes vanhees71
  • #136
vanhees71 said:
All of physics requires the anthropocentric notions like "state preparation" and "measurement", because physics has estabilished itself as a description of the phenomena in quantitative terms, i.e., you must define in some way, how to measure things, including the definition of units etc. Only then can you specify the "state of the system" at some initial time ("preparation") in a concise quantitative way and measure properties of the system in a concise quantitative way at a later time ("measurement of observables"), and this is not different in classical physics too. So quantum theory is not more anthropocentric than classical physics.
For me, this is just rationalization. Without realizing it, like so many physicists you have fallen victim to Bohr's tranquilizing philosophy ("Beruhigungsphilosophie", as Einstein put it). It's probably pointless to continue the discussion, if you refuse to even consider the possibility of a deeper understanding of quantum theory. Bell's qualms about the theory proved to be remarkably fertile, even leading to a kind of quantum information "industry", as you yourself have admitted. In his essay "Against Measurement" Bell argued that the axioms of such a fundamental theory should be formulated without vague notions like "measurement" or "system". But if you believe there is no better way, then nothing will convince you.
 
  • #137
John Mcrain said:
Listen just 1 minute, what does it mean when he said nobody understand quantum mechanics?
This sound like comedy
I often differentiate between being able to do the math and having a good "feel" for what is going on.
As a programmer, I can take some formulae, code them up, test them, and be on my way without "understanding" anything about them.

More satisfying is to have a picture in my head about what is going on - a picture that guides me in broadening or extending the equations that I started with. With programming, it allows me to come up with more creative test cases. This picture probably comes from a combination of the genetics of my mentality and all those trivial experiments I performed with throwing balls, falling down, and playing with my food in my earliest years.

QM doesn't lend itself to that kind of "understanding". Particles do not move like play toys. When a ball is thrown, not only does it have a starting point and an ending point, but an entire trajectory. It is always somewhere - somewhere very specific. In QM, that is a failing model. Saying that the particle "travels" is simply borrowing an available term from our language. When the photon hits the screen in the double-slit experiment, it is as though that photon has already tested the entire slitted barrier, treated it as a hologram, worked out the entire Fourier transform to determine what its options are, then finally generated random values consistent with the constraints of Heisenberg uncertainty.

We can know what it's doing. We can work through the numbers. We can fully simulate the process. But it's non-trivial to create a working mental picture that follows the process as closely as we do with a tossed ball.

Once you do get a good picture, it will not be reinforced by your daily experiences. Unless you are working with other Quantum Physicists, the language to describe it will not be exercised in your daily social dialogs.

And really, when it comes to QM experiments, by my measure, the double-slit experiment isn't the worst. If you move on to the Bell inequality, you should be able to fully understand the "paradox". But then to go beyond the arithmetic and "understand" the underlying process requires you to pick your favorite way to move information around in ways that you have innately trained yourself to take as impossible.
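The "worked out the entire Fourier transform" remark above can be made concrete: in the Fraunhofer (far-field) limit, the double-slit intensity pattern is the squared Fourier transform of the aperture, a single-slit sinc envelope modulating two-slit cosine fringes. A minimal sketch; the slit width and separation (in units of the wavelength) are my own illustrative numbers:

```python
# Far-field double-slit pattern: sinc^2 envelope times cos^2 fringes.
# Slit width a and separation d are assumed illustrative values,
# measured in wavelengths.
import numpy as np

d, a = 5.0, 1.0                         # slit separation and width
theta = np.linspace(-0.3, 0.3, 1001)    # observation angles (radians)

beta = np.pi * a * np.sin(theta)        # single-slit phase term
alpha = np.pi * d * np.sin(theta)       # two-slit phase term

# np.sinc(x) = sin(pi x)/(pi x), so pass beta/pi to get sin(beta)/beta
intensity = (np.sinc(beta / np.pi) ** 2) * (np.cos(alpha) ** 2)

# Central maximum at theta = 0 (index 500), fringes away from it
print(intensity[500])  # 1.0 at the center
```

The "random values" in .Scott's picture then correspond to individual detection positions sampled from this intensity profile.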
 
Last edited:
  • Like
Likes vanhees71, Simple question and WernerQH
  • #138
LittleSchwinger said:
Such a concept wouldn't make much sense.

In QM you have your algebra of observables and then per Gleason's theorem (or Busch's if you take POVMs) quantum states, i.e. statistical operators, can be derived as probability assignments to the observables.

Thus they do take values probabilistically, but having the operator itself be random wouldn't make much physical sense. In a lab we know if we are measuring ##S_{x}## or ##S_{z}##, based on the orientation of the Stern-Gerlach magnets for example, that doesn't fluctuate.
But within a "real" lab (i.e. not an idealized thought experiment) you don't really know if you're measuring ##\hat S_x## or ##\hat S_{x+\eta}##, where ##\eta## is some (presumably random) perturbation within experimental error.
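The quoted point, that a statistical operator is just a probability assignment ##\mathrm{tr}(\rho P)## over the observables, can be sketched numerically. A toy example of my own (not from the thread): prepare the ##S_x=+1## eigenstate, where ##S_x## is definite, and read off the purely statistical ##S_z## predictions:

```python
# Born rule as a probability assignment tr(rho * P) over projectors,
# in the spirit of Gleason's theorem. Toy spin-1/2 example.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x

# Prepare the +1 eigenstate of S_x: |+x> = (|0> + |1>)/sqrt(2)
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
assert np.allclose(sx @ plus_x, plus_x)          # check it is an eigenstate

rho = np.outer(plus_x, plus_x.conj())            # statistical operator

# Projectors onto the S_z = +1 and S_z = -1 eigenspaces
P_up = np.array([[1, 0], [0, 0]], dtype=complex)
P_dn = np.array([[0, 0], [0, 1]], dtype=complex)

p_up = np.trace(rho @ P_up).real
p_dn = np.trace(rho @ P_dn).real
print(p_up, p_dn)   # 0.5 0.5: definite S_x, but only statistics for S_z
```

The state fixes the statistics of every observable, but yields no individual prediction for ##S_z##, exactly as in the Schwinger quotes earlier in the thread.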
 
  • #139
Couchyam said:
But within a "real" lab (i.e. not an idealized thought experiment) you don't really know if you're measuring ##\hat S_x## or ##\hat S_{x+\eta}##, where ##\eta## is some perturbation within experimental error.
That's handled with POVM tomography though, not randomised observables.
 
  • Like
Likes gentzen
  • #140
.Scott said:
QM doesn't lend itself to that kind of "understanding". Particles do not move like play toys. When a ball is thrown, not only does it have a starting point and a ending point, but an entire trajectory. It is always somewhere - somewhere very specific. In QM, that is a failing model. Saying that the particle "travels" is simply borrowing an available term from our language. When the photon hits the screen in the double-slit experiment, it is as though the that photon has already tested the entire slitted barrier, treated it as a hologram, worked out the entire Fourier Transform to determine what its options are, then finally generated random values to supplement the constraints of Heisenberg Uncertainty.

We can know what it's doing. We can work through the numbers. We can fully simulate the process. But it's non-trivial to create a working mental picture that follows the process as closely as we do with a tossed ball.
Simulate the process: yes. Know what "it" is doing: no. The picture of "something" that travels through the double slit is a mental image of overwhelming power. How else to explain what is going on in the experiments? But decades of brooding about the nature of photons have only produced abstract formalism. What we can most likely agree on is that there is a short "jiggling" of an electron at the source, followed a few nanoseconds later by a similar short jiggling of another electron a few meters away. QED lets us calculate the probabilities of such patterns of events. Our intuition misleads us to imagine something that "travels" from the source to the detectors. I think we should be less ambitious about "explaining" the correlations between events, and view QED as a theory that just describes the patterns of events that are scattered on the canvas of spacetime.
 
  • #141
LittleSchwinger said:
That's handled with POVM tomography though, not randomised observables.
Any experiment must account for the possibility of noise and experimental error. You might devise some way to use POVM tomography to constrain the distribution of ##\eta##, but at the end of the day there will always be some statistical uncertainty.
 
  • #142
Couchyam said:
Any experiment must account for the possibility of noise and experimental error. You might devise some way to use POVM tomography to constrain the distribution of ##\eta##, but at the end of the day there will always be some statistical uncertainty.
That's incorporated into the POVM.
 
  • Like
Likes vanhees71
  • #143
LittleSchwinger said:
That's incorporated into the POVM.
To my knowledge, POVM stands for "Projection Operator Valued Measure." Is the measure normalized to a particular number in the type of POVM that you have in mind?
 
  • #144
WernerQH said:
For me, this is just rationalization. Without realizing it, like so many physicists you have fallen victim to Bohr's tranquilizing philosophy ("Beruhigungsphilosophie", as Einstein put it). It's probably pointless to continue the discussion, if you refuse to even consider the possibility of a deeper understanding of quantum theory. Bell's qualms about the theory proved to be remarkably fertile, even leading to a kind of quantum information "industry", as you yourself have admitted. In his essay "Against Measurement" Bell argued that the axioms of such a fundamental theory should be formulated without vague notions like "measurement" or "system". But if you believe there is no better way, then nothing will convince you.
But particularly Bell's work shows that standard QT is all there is. There's no realism, i.e. there's precisely the randomness about the outcome of measurements that's observed in the many Bell tests based on Bell's work. For me physics is about what can be objectively observed and described by mathematical theories. Since QT does this with great accuracy I indeed don't know what's still to be "deeper understood". It's not Beruhigungsphilosophie but simply the best description of the empirical facts.

What's really not yet understood is quantum gravity, but that's a scientific and not a philosophical problem.
 
  • Like
Likes Paul Colby and LittleSchwinger
  • #145
Couchyam said:
To my knowledge, POVM stands for "Projection Operator Valued Measure." Is the measure normalized to a particular number in the type of POVM that you have in mind?
POVM=positive operator valued measure. It's a more general description of measurements, covering those that are not von Neumann filter measurements described by projection operators (PVMs=projection-valued measures).
 
  • Like
Likes LittleSchwinger
  • #146
vanhees71 said:
POVM=positive operator valued measure. It's a more general description of measurements, covering those that are not von Neumann filter measurements described by projection operators (PVMs=projection-valued measures).
And... are they typically normalized to something?
 
  • #147
Couchyam said:
And... are they typically normalized to something?
They sum up to the identity operator.
https://en.wikipedia.org/wiki/POVM
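A quick numerical illustration of that completeness property, using the standard "unsharp" spin-z POVM ##E_\pm = \tfrac{1}{2}(I \pm \eta\,\sigma_z)## as an example; the visibility ##\eta## is my own assumed value, modelling a noisy detector:

```python
# Unsharp spin-z POVM E± = (I ± eta*sigma_z)/2: elements are positive
# operators that sum to the identity, but are not projectors for eta < 1.
import numpy as np

eta = 0.9                                        # assumed detector visibility
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z
I = np.eye(2, dtype=complex)

E_plus = (I + eta * sz) / 2
E_minus = (I - eta * sz) / 2

assert np.allclose(E_plus + E_minus, I)              # completeness
assert np.all(np.linalg.eigvalsh(E_plus) >= 0)       # positivity

# For a spin prepared "up" along z the outcomes are not 1/0 but (1±eta)/2,
# reflecting noise such as false positives or dark counts.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
p_plus = np.trace(rho @ E_plus).real
print(p_plus)  # ≈ 0.95
```

Per Naimark's dilation theorem, this POVM can be recovered from an ordinary PVM on a larger Hilbert space by tracing out an ancilla.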

As per Naimark's dilation theorem one can explain the POVM from a PVM in a larger space. This in itself is easy to see, as it gives a distribution based on "ignorance" of some of the reduced information. Also there is no conceptual problem here as long as we stick to subatomic physics, i.e. a dominant observing context with a small quantum system.

My objection was: what if the agent can't encode this larger space? The usual thinking doesn't consider the explicit context; you can just imagine enlarging the Hilbert space mathematically, as you don't care about the context. It's just math! The "problem" does not lie in the math, but in the information processing of the implied information, and where this "physically" takes place, in the system, or in the environment, or where? In particle physics we know the answer: the application of QFT means all information processing takes place in the classical environment, in the lab. And then none of my points make any sense! But if one starts to include cosmological perspectives, where the observer is not surrounding an atomic-scale collision domain but is on the contrary immersed in the system of inquiry, then the method of enlarging the Hilbert space becomes problematic, to me at least.

The difference in relative size between the parts on each side of the Heisenberg cut seems critical to me. It's only when the observing side is dominant that the effective classical reference is in place, and this is also when QFT as it stands indeed makes sense. Here it is also not necessary to worry about "where information is processed", as the environment is so dominant that we can in principle do whatever we want and not be limited by computational speed or memory - until we start to speak about black holes (and include gravity).

/Fredrik

 
  • #148
A POVM is fairly mundane stuff. One doesn't need to talk about cosmology, black holes, Heisenberg cuts or anything else. A POVM models measurements that have errors, thermal noise, false positives, dark counts, indirect measurements via coupling to an ancilla and so on. It's just as vanhees71 said above: a more general description of measurements than PVMs.

Thus regarding Couchyam's earlier statement, we don't need to consider randomised operators when discussing imprecise, noisy, etc measurements. We just use POVMs.

vanhees71 said:
What's really not yet understood is quantum gravity, but that's a scientific and not a philosophical problem.
I agree.

My own take is that quantum theory was mostly sorted out conceptually by 1935. Heisenberg formulated the beginnings of the theory in 1925, but there were several conceptual points to be cleared up. These include details like the Mott paper on how lines observed in a cloud chamber were compatible with wave mechanics, von Neumann putting the theory on a sound mathematical footing, entanglement being first articulated, properly understanding scattering theory, realising that non-commuting variables were not simply a matter of "disturbance", and so on.

What wasn't fully appreciated by 1935 were the deeper uses that could be made of entanglement and why the collective coordinates of macroscopic bodies, e.g. positions of planets, motion of a car, obey classical probability theory.

Entanglement has since been much better understood. For the latter, we can now show how coarse-graining, decoherence, the exponential size of the Hilbert space and many other effects suppress coherence for the macro-coordinates of a typical large object to below an inverse googolplex in magnitude.

Had there really been other problems they would have shown up in formulating QFT. The actual issues there however were completely separate: correctly formulating the Poincaré group for Hilbert spaces, renormalisation, the relation between particles and fields, treating couplings to massless states (i.e. gauge theories).
 
  • Like
Likes mattt, dextercioby and gentzen
  • #149
LittleSchwinger said:
A POVM is fairly mundane stuff. One doesn't need to talk about cosmology, black holes, Heisenberg cuts or anything else. A POVM models measurements that have errors, thermal noise, false positives, dark counts, indirect measurements via coupling to an ancilla and so on. It's just as vanhees71 has above, a more general description of measurements than PVMs.
I think what @Fra was alluding to is that a POVM is more clearly relevant to open quantum systems whose dynamics are induced by a larger 'coherent' quantum system (although I may have misinterpreted Fra's comment.) A POVM can always be expanded into a PVM theoretically, but practically there's a limit to the complexity of the Hilbert space of a typical laboratory or experimenter, and thus there may be some POVMs (for some especially complex systems) that cannot be expanded into a controlled, laboratory-friendly, bona fide PVM (although a PVM may still exist in principle.)
LittleSchwinger said:
Thus regarding Couchyam's earlier statement, we don't need to consider randomised operators when discussing imprecise, noisy, etc measurements. We just use POVMs.
I think I've found the source of confusion. My example of measuring ##S_{\hat x}## versus ##S_{\hat x+\eta}## may have been misleading, because from a purely quantum mechanical perspective, the randomness in any measurement could just be absorbed either by the state information itself, or the way in which the chosen observables partition state space (i.e. the POVM might not be a PVM.)
There is an important point to observe, however, which is loosely related to how a thermal average differs from a quenched average in statistical mechanics. It's that at the end of the day, in a Stern-Gerlach-style apparatus, there is a definite classical direction ##\hat x+\vec\eta## for the magnetic field that deflects the beam, and that direction needn't change (randomly) between trials; in fact, it would probably remain approximately constant throughout the experiment. The perturbation ##\vec\eta## is determined when the experiment is set up, might even change over time depending on how rambunctious the undergraduates are, and the difference between ##\hat x+\vec\eta## and ##\hat x## might not be explicable through an appropriate POVM. Similarly, the actual Hamiltonian of a particular experimental realization might differ slightly from the idealized theoretical version (two fields that are supposed to be perpendicular might be slightly misaligned in a particular experimental setup, etc.) Does that partially clarify where I was coming from?
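Couchyam's scenario can be made quantitative with a toy calculation; the tilt value below is my own assumption. A field along ##\hat x + \vec\eta## measures the spin component along that actual axis, so a state prepared as ##S_x = +1## no longer gives the ##+1## outcome with certainty, and the shortfall is fixed by the apparatus rather than fluctuating per trial:

```python
# Spin measurement along a slightly misaligned axis n = x + eta*z.
# The tilt eta is a fixed property of the apparatus, not per-trial noise.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z

eta = 0.05                         # assumed small tilt of the field axis
n = np.array([1.0, 0.0, eta])
n /= np.linalg.norm(n)
S_n = n[0] * sx + n[2] * sz        # spin component along the actual axis

# State prepared as S_x = +1; an ideal S_x measurement would give "+1"
# with probability exactly 1
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)

evals, evecs = np.linalg.eigh(S_n)
v_plus = evecs[:, np.argmax(evals)]              # +1 eigenvector of S_n
p = abs(v_plus.conj() @ plus_x) ** 2
print(p)   # slightly below 1 due to the fixed misalignment
```

Here the deficit ##1-p## is the same in every run, which is the quenched-versus-thermal distinction Couchyam gestures at: a fixed unknown parameter, not a random variable resampled each trial.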
 
  • #150
vanhees71 said:
But particularly Bell's work shows that standard QT is all there is.
Does it? Hasn't it widened the field (quantum cryptography, quantum "teleportation", quantum computing)?
vanhees71 said:
There's no realism, i.e. there's precisely the randomness about the outcome of measurements that's observed in the many Bell tests based on Bell's work.
My conclusion is the exact opposite. I'd rather give up the "sacred" locality than realism. For me, realism means accepting the results of experiments as real; it does not mean we have to believe in the existence of photons with definite polarization states.
vanhees71 said:
For me physics is about what can be objectively observed and described by mathematical theories. Since QT does this with great accuracy I indeed don't know what's still to be "deeper understood". It's not Beruhigungsphilosophie but simply the best description of the empirical facts.
I agree that we are in possession of a very good description, but I doubt that we have found the best formulation. Obviously you can't conceive of the possibility that quantum theory (after almost a century!) may be in a situation similar to that of electrodynamics before 1905.
 
  • Like
Likes physika, Couchyam, kurt101 and 2 others
