Does the EPR experiment imply QM is incomplete?

Summary
The discussion centers on the implications of the EPR experiment for the completeness of Quantum Mechanics (QM). Participants debate whether QM's probabilistic nature suggests an underlying algorithm that governs entangled particles or if it indicates a deeper, interconnected reality. Some argue that the correct predictions of QM do not necessitate a causal relationship between distant events, emphasizing that the photons' anti-correlation does not imply faster-than-light communication. The violation of Bell inequalities is cited as evidence against classical interpretations that require local hidden variables. Ultimately, the conversation highlights the tension between classical intuitions and the non-local features of quantum phenomena, suggesting that QM may not be incomplete but rather fundamentally different from classical physics.
  • #121
stevendaryl said:
So on the one hand, you have a prediction from quantum mechanics (Schrodinger's equation) that the measurement device will be described by one state, a superposition of various possible macroscopic results.
On the other hand, you have a prediction from quantum mechanics (Born's rule) that the measurement device will be described by one or another state, where each has a definite macroscopic result.

Please can you express these alternatives as equations. I cannot make sense of the words.

[edit]

OK, the first one is a superposition and the second is not. Sorry for the dyslexia (or something).
 
  • #122
PeterDonis said:
These two statements are inconsistent. The SE gives you "superpositions of solutions" just by time evolution. You can't arbitrarily exclude "superpositions of solutions" and still use the SE at all.
PeterDonis said:
An eigenstate of the Hamiltonian will stay an eigenstate of the Hamiltonian, yes. But an eigenstate of the Hamiltonian has no interactions whatever--nothing ever happens to it. So no real object is ever in an eigenstate of the Hamiltonian. Any state that is a reasonable candidate to describe a real object will change under time evolution; and any state that, at some instant of time, happens to look like a reasonable classical state of a classical object, will not stay that way; it will evolve into a "Schrodinger's Cat" type state that does not describe anything like a classical state of a classical object.

I think it is implicit in the statement of @stevendaryl's 'soft paradox' that there is an operator that acts on the macroscopic wave function (in the appropriate basis). The superposition must be over two or more of its eigenstates. So it must have started as a preparation of a superposition. If these preparations are not allowed, then obviously we can't evolve into a superposition.

It all comes down to preparation again.
 
  • #123
Mentz114 said:
So it must have started as a preparation of a superposition. If these preparations are not allowed, then obviously we can't evolve into a superposition.

That's not correct as you state it. An eigenstate of any operator other than the Hamiltonian will not stay an eigenstate of that operator under time evolution, unless that operator commutes with the Hamiltonian. It will evolve into a superposition of eigenstates.

Some useful operators, such as total momentum and total angular momentum, commute with the Hamiltonian. But I don't think any operator that might represent a realistic observable for a macroscopic system being measured--something like center of mass position, for example--will commute with the Hamiltonian.
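To make the point concrete, here is a minimal numerical sketch of my own (not from the thread), using a harmonic oscillator truncated to 20 levels with ##\hbar = \omega = 1##: the number operator commutes with the Hamiltonian, while a position-like operator does not, so a position-like eigenstate will not stay an eigenstate under time evolution.

```python
import numpy as np

# Sketch with assumed units hbar = omega = 1; harmonic oscillator truncated to N levels.
N = 20
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)       # lowering operator (truncated)
adag = a.conj().T                      # raising operator

H = adag @ a + 0.5 * np.eye(N)         # Hamiltonian, diagonal in the number basis
num = adag @ a                         # number operator: commutes with H
x = (a + adag) / np.sqrt(2)            # position-like operator: does not commute with H

def comm_norm(A, B):
    """Frobenius norm of the commutator [A, B]."""
    return np.linalg.norm(A @ B - B @ A)

print("||[H, N]|| =", comm_norm(H, num))   # ~0: number eigenstates stay eigenstates
print("||[H, x]|| =", comm_norm(H, x))     # nonzero: a position-like eigenstate spreads out
```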
 
  • #124
PeterDonis said:
That's not correct as you state it. An eigenstate of any operator other than the Hamiltonian will not stay an eigenstate of that operator under time evolution, unless that operator commutes with the Hamiltonian. It will evolve into a superposition of eigenstates.

Some useful operators, such as total momentum and total angular momentum, commute with the Hamiltonian. But I don't think any operator that might represent a realistic observable for a macroscopic system being measured--something like center of mass position, for example--will commute with the Hamiltonian.
Thank you. That is interesting. Two questions about the macroscopic apparatus come to mind:
If we are measuring a quantum property, would the apparatus have the same eigenvalues as the microscopic property?
If the observable is an angle, would the operator commute with ##\hat{H}##?
 
  • #125
Mentz114 said:
If we are measuring a quantum property, would the apparatus have the same eigenvalues as the microscopic property?

Eigenvalues of what operator?

When we model quantum measurements as applying an operator to the system, that operator is an operator on the Hilbert space describing the measured system, not the measured system + measuring device. The eigenvalues of the operator are therefore eigenvalues applying to the measured system only. If you think about it, this is already an admission that such a model is incomplete.

When we try to construct a more complete model, where we include the measuring device and the interaction between it and the measured system, then we no longer model measurement as applying an operator to the system; we model it as just time evolving the system using the Schrodinger Equation with the Hamiltonian including the interaction between the measuring device and the measured system. This time evolution then puts the whole system (measured system + measuring device) into an entangled state. This state is not an eigenstate of any operator (or at least, not of any operator that anyone is writing down and using in the analysis), so there aren't any useful eigenvalues that apply to it.

In such a more complete model, "measurement results" are really encoded in the labels that are put on the terms in the entangled state. For example, say we put a qubit through a spin measurement. The resulting state will look something like (leaving out normalization factors): ##\vert + \rangle \vert M_+ \rangle + \vert - \rangle \vert M_- \rangle##. Here the measurement result "measured spin up" is encoded in the kets in the first term, and the measurement result "measured spin down" is encoded in the kets in the second term. It is not encoded in eigenvalues of any operator.
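To make that concrete, here is a minimal numerical sketch of my own (with an assumed single-qubit "device", not anything from the post): the entangled post-measurement state ##\vert + \rangle \vert M_+ \rangle + \vert - \rangle \vert M_- \rangle## is not an eigenstate of the measured spin observable acting on the qubit alone.

```python
import numpy as np

# Sketch: the post-measurement entangled state (normalized here) is not an
# eigenstate of sigma_z acting on the measured qubit alone.
plus  = np.array([1, 0], dtype=complex)   # |+> (spin up)
minus = np.array([0, 1], dtype=complex)   # |-> (spin down)
M_plus, M_minus = plus, minus             # stand-ins for the device states |M+>, |M->

psi = (np.kron(plus, M_plus) + np.kron(minus, M_minus)) / np.sqrt(2)

sigma_z = np.diag([1.0, -1.0])
op = np.kron(sigma_z, np.eye(2))          # sigma_z on the qubit, identity on the device

out = op @ psi
# If psi were an eigenstate, out would be a scalar multiple of psi; it is not:
print(np.allclose(out, psi), np.allclose(out, -psi))   # False False
```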

Mentz114 said:
If the observable is an angle, would the operator commute with ##\hat{H}##?

I don't think so. Measuring an angle is not the same as measuring angular momentum.
 
  • #126
Mentz114 said:
Please can you express these alternatives as equations. I cannot make sense of the words.

Let's make it simple, and suppose that we have some measurement device that measures the spin of a particle along the z-axis. For the particle, ##|u\rangle## is the state that is spin-up in the z-direction, and ##|d\rangle## is the state that is spin-down. Let's suppose that ##|0\rangle## is the initial "ready" state of the device, and let ##|U\rangle## mean "measured spin-up" and ##|D\rangle## mean "measured spin down". Those are sometimes called "pointer" states.

So the assumption that the device actually works as a measuring device is that:
  1. ##|u\rangle |0\rangle \Rightarrow |u\rangle |U\rangle##
  2. ##|d\rangle |0\rangle \Rightarrow |d\rangle |D\rangle##
(where ##\Rightarrow## means "evolves into, taking into account the Schrodinger equation")

By linearity of the Schrodinger equation, it follows that:

##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow \alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##

So if the particle starts off in a superposition of states, then the measuring device (and the rest of the universe, eventually, but we're not modeling that) ends up in a superposition of different "pointer" states.

So that's the prediction of the Schrodinger equation. The Born rule says, instead, that:
  1. ##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow |u\rangle |U\rangle## with probability ##|\alpha|^2##
  2. ##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow |d\rangle |D\rangle## with probability ##|\beta|^2##
The Born rule gives a probabilistic transition rule leading to a definite pointer state, while the Schrodinger equation gives a deterministic transition rule leading to a superposition of pointer states. Those are different, and contradictory, predictions.
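To make the contrast concrete, here is a minimal numerical sketch of my own (not part of the thread), with made-up amplitudes ##\alpha = 0.6##, ##\beta = 0.8## and a four-dimensional composite Hilbert space: unitary evolution leaves the composite system in a superposition of pointer states, while the Born rule picks one definite pointer state at random.

```python
import numpy as np

# Sketch with made-up amplitudes; basis order of the composite system:
# index 0 = |u,0>, 1 = |d,0>, 2 = |u,U>, 3 = |d,D>.
alpha, beta = 0.6, 0.8                                  # |alpha|^2 + |beta|^2 = 1
psi_in = np.array([alpha, beta, 0, 0], dtype=complex)   # (alpha|u> + beta|d>)|0>

# One unitary consistent with "the device works": |u,0> -> |u,U>, |d,0> -> |d,D>.
U = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=complex)

psi_out = U @ psi_in
print("Schrodinger prediction:", psi_out)   # alpha|u,U> + beta|d,D>: still a superposition

# Born rule: one definite pointer state, chosen with probabilities |alpha|^2, |beta|^2.
rng = np.random.default_rng(0)
outcome = rng.choice(["|u,U>", "|d,D>"], p=[abs(alpha)**2, abs(beta)**2])
print("Born-rule prediction:", outcome)
```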
 
  • #127
Mentz114 said:
I think it is implicit in the statement of @stevendaryl's 'soft paradox' that there is an operator that acts on the macroscopic wave function (in the appropriate basis). The superposition must be over two or more of its eigenstates. So it must have started as a preparation of a superposition. If these preparations are not allowed, then obviously we can't evolve into a superposition.

Think about an actual measurement. You pass an electron through a Stern-Gerlach device. The electron is either diverted to the right, if it's spin-up, or it's diverted to the left, if it's spin-down. If the electron is diverted left, it makes a visible dark spot on the left side of a photographic plate. If it's diverted to the right, it makes a visible dark spot on the right side of a photographic plate.

So connecting this to the mathematics above, ##|U\rangle## is the state of the Stern-Gerlach system plus photographic plate in which there is a dark spot on the right. ##|D\rangle## is the state where the dark spot is on the left.
 
  • #128
stevendaryl said:
By linearity of the Schrodinger equation, it follows that:

##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow \alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##

So if the particle starts off in a superposition of states, then the measuring device (and the rest of the universe, eventually, but we're not modeling that) ends up in a superposition of different "pointer" states.

So that's the prediction of the Schrodinger equation. The Born rule says, instead, that:
  1. ##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow |u\rangle |U\rangle## with probability ##|\alpha|^2##
  2. ##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow |d\rangle |D\rangle## with probability ##|\beta|^2##
The Born rule gives a probabilistic transition rule leading to a definite pointer state, while the Schrodinger equation gives a deterministic transition rule leading to a superposition of pointer states. Those are different, and contradictory, predictions.
They are not contradictory if they describe different aspects of reality.
One describes the phase difference between different outcomes (HVs).
The other prediction describes the actual outcome (when the HV is revealed).

In the case of microscopic systems you can't perform both measurements at the same time. In the case of macroscopic systems we don't know how to measure the phase relationship (perform an interference measurement) between different outcomes, but hypothetically, if we knew how to perform that interference measurement, we would not be able to learn what the actual outcome was in a particular case.
 
  • #129
stevendaryl said:
Let's make it simple, and suppose that we have some measurement device that measures the spin of a particle along the z-axis. For the particle, ##|u\rangle## is the state that is spin-up in the z-direction, and ##|d\rangle## is the state that is spin-down. Let's suppose that ##|0\rangle## is the initial "ready" state of the device, and let ##|U\rangle## mean "measured spin-up" and ##|D\rangle## mean "measured spin down". Those are sometimes called "pointer" states.

So the assumption that the device actually works as a measuring device is that:
  1. ##|u\rangle |0\rangle \Rightarrow |u\rangle |U\rangle##
  2. ##|d\rangle |0\rangle \Rightarrow |d\rangle |D\rangle##
(where ##\Rightarrow## means "evolves into, taking into account the Schrodinger equation")

By linearity of the Schrodinger equation, it follows that:

##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow \alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##

So if the particle starts off in a superposition of states, then the measuring device (and the rest of the universe, eventually, but we're not modeling that) ends up in a superposition of different "pointer" states.

So that's the prediction of the Schrodinger equation. The Born rule says, instead, that:
  1. ##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow |u\rangle |U\rangle## with probability ##|\alpha|^2##
  2. ##(\alpha |u\rangle + \beta |d\rangle)|0\rangle \Rightarrow |d\rangle |D\rangle## with probability ##|\beta|^2##
The Born rule gives a probabilistic transition rule leading to a definite pointer state, while the Schrodinger equation gives a deterministic transition rule leading to a superposition of pointer states. Those are different, and contradictory, predictions.
I don't see any ##e^{-i\hat{H}t}## in there. Unless you have explicitly different evolutions I can't see any contradiction. The state of the macroscopic apparatus needs to be there also, because the point in question is the final state. You need to give the assumptions concerning the preparation of the microscopic state and the apparatus.
 
  • #130
Mentz114 said:
I don't see any ##e^{-i\hat{H}t}## in there.

Yeah, well, the stumbling block for any discussion of the application of quantum mechanics to large systems is that it's difficult to do it rigorously.

Unless you have explicitly different evolutions I can't see any contradiction.

That's why I call it a "soft contradiction". The computations needed to actually derive the state from first principles are intractable.

However, we know that an electron hitting a photographic plate does cause a dark spot on the plate. To believe that this is not described by quantum mechanics seems to be equivalent to believing that quantum mechanics is incorrect or incomplete.

I spent a certain amount of time studying AI and one of the systems I looked at was the CYC project (I don't know whether that's been abandoned, or not). CYC had a notion of "microtheories" which described a particular small domain. For a physics example, statics. It knew how to reason within a microtheory. Then there were heuristics on top of the microtheories to decide which microtheory to use in what circumstances.

It was suspected that the microtheories were actually inconsistent, but it was a "soft" inconsistency, because you never applied more than one microtheory at a time.

Quantum mechanics is similarly composed of two microtheories for describing systems: For small systems, you use the Schrodinger equation. For large systems, you use the Born rule.
 
  • #131
PeterDonis said:
Eigenvalues of what operator?

[cut for brevity]
Thanks for trying to unconfuse me. I have to keep asking about things that should be included in the description of the problem, and I have asked stevendaryl for clarification.
There may be a contradiction in the requirement that the apparatus state should be highly correlated with the incoming system unless the apparatus has some similarities with the incoming state - viz. common eigenvalues, which implies an operator (?). The states being evolved must include the apparatus, and the outcome depends only on the Hamiltonian and the initial state. This does not always result in a superposition, but I don't know if that is relevant because the 'contradiction' is not unambiguously defined.
 
  • #132
zonde said:
They are not contradictory if they describe different aspects of reality.
One describes the phase difference between different outcomes (HVs).
The other prediction describes the actual outcome (when the HV is revealed).

In the case of microscopic systems you can't perform both measurements at the same time. In the case of macroscopic systems we don't know how to measure the phase relationship (perform an interference measurement) between different outcomes, but hypothetically, if we knew how to perform that interference measurement, we would not be able to learn what the actual outcome was in a particular case.
Yes, that is true. In 'collapse' terms the first case is uncollapsed, but the second is after measurement. That is pretty important.
 
  • #133
zonde said:
They are not contradictory if they describe different aspects of reality.

They predict different future probabilities, so they are contradictory predictions. To see this, suppose there is a final state ##|final\rangle##. Let ##\psi_{U}## be the probability amplitude for making a transition from ##|u\rangle |U\rangle## to the state ##|final\rangle##. Let ##\psi_{D}## be the probability amplitude for making a transition from ##|d\rangle |D\rangle## to the state ##|final\rangle##.

Then, the probability of later observing the system in the state ##|final\rangle## will be

##|\psi_{U}|^2 |\alpha|^2 + |\psi_{D}|^2 |\beta|^2##

under the assumption that the composite system nondeterministically transitioned to ##|u\rangle |U\rangle##, with probability ##|\alpha|^2## or to ##|d\rangle |D\rangle##, with probability ##|\beta|^2##.

In contrast, if the composite system is in the superposition ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##, then the probability of ending up in state ##|final\rangle## will be given by:

##|\psi_{U}|^2 |\alpha|^2 + |\psi_{D}|^2 |\beta|^2 + \alpha^* \psi_{U}^* \beta \psi_{D} +\beta^* \psi_{D}^* \alpha \psi_{U} ##

Those are different, and inconsistent, predictions.
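Plugging made-up numbers into the two formulas (a sketch of my own; the values of ##\alpha##, ##\beta##, ##\psi_U##, ##\psi_D## are assumptions chosen purely for illustration) shows that the predictions differ whenever the cross terms don't cancel:

```python
import numpy as np

# Sketch with made-up numbers for alpha, beta, psi_U, psi_D.
alpha, beta = 0.6, 0.8
psi_U = 0.5 * np.exp(1j * 0.3)          # amplitude <final|U(t)|u,U> (made up)
psi_D = 0.5 * np.exp(1j * 2.1)          # amplitude <final|U(t)|d,D> (made up)

# Born rule applied at the measurement: one definite pointer state, we just don't
# know which, so the probabilities add.
p_definite = abs(psi_U)**2 * abs(alpha)**2 + abs(psi_D)**2 * abs(beta)**2

# Pure Schrodinger evolution: the composite system is in the superposition
# alpha|u,U> + beta|d,D>, so the amplitudes add and cross terms appear.
p_superposed = abs(alpha * psi_U + beta * psi_D)**2

print("definite pointer state :", p_definite)     # 0.25
print("superposition          :", p_superposed)   # 0.25 plus cross terms
```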
 
  • #134
stevendaryl said:
They predict different future probabilities, so they are contradictory predictions. To see this, suppose there is a final state ##|final\rangle##. Let ##\psi_{U}## be the probability amplitude for making a transition from ##|u\rangle |U\rangle## to the state ##|final\rangle##. Let ##\psi_{D}## be the probability amplitude for making a transition from ##|d\rangle |D\rangle## to the state ##|final\rangle##.

Then, the probability of later observing the system in the state ##|final\rangle## will be

##|\psi_{U}|^2 |\alpha|^2 + |\psi_{D}|^2 |\beta|^2##

under the assumption that the composite system nondeterministically transitioned to ##|u\rangle |U\rangle##, with probability ##|\alpha|^2## or to ##|d\rangle |D\rangle##, with probability ##|\beta|^2##.

In contrast, if the composite system is in the superposition ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##, then the probability of ending up in state ##|final\rangle## will be given by:

##|\psi_{U}|^2 |\alpha|^2 + |\psi_{D}|^2 |\beta|^2 + \alpha^* \psi_{U}^* \beta \psi_{D} +\beta^* \psi_{D}^* \alpha \psi_{U} ##

Those are different, and inconsistent, predictions.
That is clearer. Coherence is the difference, as zonde pointed out. Is this not the problem that decoherence theory addresses?

In the case where ##\psi_{U}## and ##\psi_D## are orthogonal, do the cross-terms disappear?
 
  • #135
stevendaryl said:
They predict different future probabilities, so they are contradictory predictions. To see this, suppose there is a final state ##|final\rangle##. Let ##\psi_{U}## be the probability amplitude for making a transition from ##|u\rangle |U\rangle## to the state ##|final\rangle##. Let ##\psi_{D}## be the probability amplitude for making a transition from ##|d\rangle |D\rangle## to the state ##|final\rangle##.

Then, the probability of later observing the system in the state ##|final\rangle## will be

##|\psi_{U}|^2 |\alpha|^2 + |\psi_{D}|^2 |\beta|^2##

under the assumption that the composite system nondeterministically transitioned to ##|u\rangle |U\rangle##, with probability ##|\alpha|^2## or to ##|d\rangle |D\rangle##, with probability ##|\beta|^2##.

In contrast, if the composite system is in the superposition ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##, then the probability of ending up in state ##|final\rangle## will be given by:

##|\psi_{U}|^2 |\alpha|^2 + |\psi_{D}|^2 |\beta|^2 + \alpha^* \psi_{U}^* \beta \psi_{D} +\beta^* \psi_{D}^* \alpha \psi_{U} ##

Those are different, and inconsistent, predictions.
You are considering a setup where ##|d\rangle |D\rangle## and ##|u\rangle |U\rangle## can end up in the same state ##|final\rangle##. This is clearly an interference measurement. But you ignore the phase relationship between the two initial states, so you are assuming that the two initial states have decohered.
On the other hand, for the state ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle## you assume no decoherence.
Can you explain your reasoning with respect to coherence/decoherence?
To me it seems like you are using two different microtheories for the two cases. In one microtheory each individual measurement outcome is completely independent of the rest of the world and/or of any other measurement in the ensemble, and so there can be no physical basis for considering any relative phase relationship between outcomes.
In the other microtheory two measurement outcomes sort of exist in two parallel but interacting worlds at the same time, so there is a way to consider the relative phase relationship between the two outcomes.

Well, I'm not sure that the inconsistency will persist if you consider both cases in the same microtheory.
 
  • #136
zonde said:
You are considering a setup where ##|d\rangle |D\rangle## and ##|u\rangle |U\rangle## can end up in the same state ##|final\rangle##. This is clearly an interference measurement. But you ignore the phase relationship between the two initial states, so you are assuming that the two initial states have decohered.

Yes, that's the reason I call it a "soft" contradiction. You can't actually calculate phases accurately enough to calculate interference effects between macroscopic objects. But if we assumed unlimited computation power, we could in principle see the difference between the two evolution equations.

Decoherence is a matter of computational ability. Two systems have decohered if it is in practice impossible to accurately predict the phase relationship between them. I actually do not think that decoherence has any role in understanding the foundations of quantum mechanics, although it does explain why in practice we don't see interference effects for macroscopic objects.

Well, I'm not sure that the inconsistency will persist if you consider both cases in the same microtheory.

That's my point: the two microtheories are: (1) smooth evolution according to Schrodinger's equation, and (2) getting definite results of measurements according to the Born rule. It's the inconsistency between these two that I'm pointing out.
 
  • #137
Mentz114 said:
That is clearer. Coherence is the difference, as zonde pointed out. Is this not the problem that decoherence theory addresses?

In the case where ##\psi_{U}## and ##\psi_D## are orthogonal, do the cross-terms disappear?

They don't disappear. They become negligible. That's why I call it a "soft" contradiction. The differences between the predictions are in practice impossible to observe. But they are different predictions.

Just for clarification, ##\psi_U## and ##\psi_D## are just complex numbers, not functions.

##\psi_U = \langle final |e^{-iHt}|u,U\rangle##
##\psi_D = \langle final | e^{-iHt}|d,D\rangle##
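If it helps, here is a minimal sketch of my own showing how such amplitudes could be computed numerically for a toy four-dimensional system with a made-up Hermitian ##H## (nothing here is specific to a real apparatus, which would be far too large to treat this way):

```python
import numpy as np
from scipy.linalg import expm

# Sketch: toy 4-dimensional composite system, made-up Hermitian H, hbar = 1.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # random Hermitian "Hamiltonian"

t = 1.0
U_t = expm(-1j * H * t)                  # time-evolution operator exp(-iHt)

uU = np.array([1, 0, 0, 0], dtype=complex)      # stand-in for |u,U>
dD = np.array([0, 1, 0, 0], dtype=complex)      # stand-in for |d,D>
final = np.array([0, 0, 1, 0], dtype=complex)   # stand-in for |final>

psi_U = final.conj() @ U_t @ uU          # complex amplitude, not a probability
psi_D = final.conj() @ U_t @ dD
print("psi_U =", psi_U)
print("psi_D =", psi_D)
```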
 
  • #138
stevendaryl said:
Decoherence is a matter of computational ability. Two systems have decohered if it is in practice impossible to accurately predict the phase relationship between them.

Actually, there are two things going on in decoherence: One, as I said, is just the practical impossibility of computing phase relationships between states of a large system. The second is that if the system becomes entangled with yet other systems, then interference effects are not possible between states of the subsystem, only between different states of the larger composite system. And rapidly, that larger system becomes the entire universe (or the nearby part of it).
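The second point can be illustrated with a small partial-trace calculation (my own sketch, using a single made-up "environment" qubit): once the environment states attached to the two branches are orthogonal, the off-diagonal interference terms of the measured subsystem's reduced density matrix vanish.

```python
import numpy as np

# Sketch: a qubit entangled with a made-up single-qubit "environment".
alpha = beta = 1 / np.sqrt(2)

def reduced_qubit_rho(env_u, env_d):
    """Reduced density matrix of the qubit for |Psi> = alpha|u>|E_u> + beta|d>|E_d>."""
    psi = alpha * np.kron([1, 0], env_u) + beta * np.kron([0, 1], env_d)
    rho = np.outer(psi, psi.conj())
    d = len(env_u)
    return np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)   # trace out the environment

same       = np.array([1, 0], dtype=complex)   # <E_u|E_d> = 1: branches still coherent
orthogonal = np.array([0, 1], dtype=complex)   # <E_u|E_d> = 0: branches decohered

print(reduced_qubit_rho(same, same))           # off-diagonal terms = 1/2 (interference possible)
print(reduced_qubit_rho(same, orthogonal))     # off-diagonal terms = 0 (no interference in the qubit alone)
```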
 
  • #139
stevendaryl said:
You can't actually calculate phases accurately enough to calculate interference effects between macroscopic objects. But if we assumed unlimited computation power, we could in principle see the difference between the two evolution equations.
I'm not sure I understand.
Do not assume there is any decoherence. The state that you would use for predicting the outcome of an interference measurement between ##|d\rangle |D\rangle## and ##|u\rangle |U\rangle## should then be ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##. Exactly the same as in the second case, because they are the same case. And your Born probabilities remain hidden variables that you can't observe, because you performed an interference measurement.

On the other hand, if you assume decoherence in both cases, then the state ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle## becomes an unphysical idealization. Again there is no inconsistency.

I guess you have your viewpoint without HVs and you are saying it is inconsistent. And my suggestion that bringing HVs into the picture makes it consistent is not satisfactory for you.
 
  • #140
I guess one of the questions about decoherence is why one basis is preferred over another. I would say that the position basis is preferred because particles form bound states when they are near each other. That determines that the macro world "lives" in the position basis.
 
  • #141
zonde said:
I'm not sure I understand.
Do not assume there is any decoherence. The state that you would use for predicting the outcome of an interference measurement between ##|d\rangle |D\rangle## and ##|u\rangle |U\rangle## should then be ##\alpha |u\rangle |U\rangle + \beta |d\rangle |D\rangle##. Exactly the same as in the second case, because they are the same case. And your Born probabilities remain hidden variables that you can't observe, because you performed an interference measurement.

I don't know what you mean by "remain hidden".

If you use the postulate that a measurement always results in an eigenvalue of the thing that is measured, then there would be no interference between the two alternatives. If you use Schrodinger's equation to compute the probabilities, then there would be interference. So the two axioms are contradictory. They predict different probabilities for winding up in the state ##|final\rangle##.

[edit] This assumes that measurement is a physical process by which a microscopic variable is amplified to make a macroscopic difference in the measuring device. If you define measurement to mean "a conscious observer becomes aware of the result", then nothing that devices do can be considered a measurement.
 
  • #142
stevendaryl said:
If you use the postulate that a measurement always results in an eigenvalue of the thing that is measured, then there would be no interference between the two alternatives.
Can you explain why you claim that? I can imagine different reasons why you might think so, and I would like to avoid guessing.
 
  • #143
Not having followed this thread in detail, I think the problem in your mutual understanding is the usual one of not distinguishing clearly between measurements of observables on a system and the preparation of the system in some state. Let's keep the story as simple as possible and assume a non-degenerate observable ##A## to be measured. This means that the eigenspaces for each eigenvalue of the corresponding representing self-adjoint operator ##\hat{A}## are all one-dimensional. Let's denote the corresponding normalized eigenvectors as ##|u_a \rangle##, where ##a## denotes the eigenvalue. Also let's suppose these are true eigenvectors, normalizable to 1, and that the spectrum of possible eigenvalues ##a## is thus discrete.

Let's further assume the system is prepared in a pure state, represented by the statistical operator ##|\psi \rangle \langle \psi|## with ##|\psi \rangle## a normalized vector. The only meaning, within the minimal interpretation, of this state is that the probability to find the value ##a## when the observable ##A## is measured is given according to Born's rule by
$$W_{\psi}(a)=|\langle u_a|\psi \rangle|^2.$$
This implies, for this most simple case, that the system is prepared with a determined value ##a## of the observable ##A## if and only if ##|\psi \rangle \langle \psi|=|u_a \rangle \langle u_a|##, and then ##W_{\psi}(a)=1## as it must be.

Otherwise
$$|\psi \rangle=\sum_{a} \psi_a |u_a \rangle$$
is in a superposition of eigenvectors of ##\hat{A}## and thus
$$W_{\psi}(a)=|\psi_a|^2.$$
For such an ideal measurement, each outcome for an individual system is always one of the possible values ##a## of ##A##, which are the eigenvalues of ##\hat{A}##, but since the observable ##A## doesn't take a determined value, there are only the given probabilities and nothing else, due to the preparation of the system in the state ##|\psi \rangle \langle \psi|##.

What's usually meant when one talks about the "measurement problem" is this very assumption that there's always a well-defined outcome when ##A## is measured, no matter whether the value of ##A## is determined or not due to the preparation of the system. This is, however, only a metaphysical problem. From a physical, i.e., operational point of view of preparations (defining the quantum state) and measurements, there's no such problem as long as all observations agree with this postulate of Born's rule.
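As a trivial numerical restatement of the formulas above (my own sketch with made-up numbers, not anything from the post):

```python
import numpy as np

# Sketch with a made-up non-degenerate observable A and state |psi>.
A = np.diag([1.0, 2.0, 3.0])              # observable (already diagonal here)
evals, evecs = np.linalg.eigh(A)          # columns of evecs are the eigenvectors |u_a>

psi = np.array([1, 1j, 1], dtype=complex)
psi = psi / np.linalg.norm(psi)           # normalized state vector

W = np.abs(evecs.conj().T @ psi) ** 2     # Born rule: W(a) = |<u_a|psi>|^2
for a, w in zip(evals, W):
    print(f"W(a={a}) = {w:.3f}")          # the probabilities sum to 1
```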
 
  • #144
vanhees71 said:
What's usually meant when one talks about the "measurement problem" is this very assumption that there's always a well-defined outcome when ##A## is measured, no matter whether the value of ##A## is determined or not due to the preparation of the system. This is, however, only a metaphysical problem. From a physical, i.e., operational point of view of preparations (defining the quantum state) and measurements, there's no such problem as long as all observations agree with this postulate of Born's rule.
The question is not about the "measurement problem" itself but rather about the inconsistency that this "measurement problem" creates (or does not create) in the model.
In simple terms, the question is where the relative phase factors go when a measurement is performed. Schrodinger's equation says that the relative phase factors live on, while the Born rule leaves no place for them to live on. @stevendaryl says there is an inconsistency because the relative phase factors are fine according to Schrodinger's equation, but they die according to the Born rule. I say that the relative phase factors could be just fine even after we apply the Born rule.
 
  • #145
vanhees71 said:
What's usually meant when one talks about the "measurement problem" is this very assumption that there's always a well-defined outcome when ##A## is measured, no matter whether the value of ##A## is determined or not due to the preparation of the system. This is, however, only a metaphysical problem. From a physical, i.e., operational point of view of preparations (defining the quantum state) and measurements, there's no such problem as long as all observations agree with this postulate of Born's rule.

My point is that the assumption that there is a well-defined (single) outcome is contradicted by unitary evolution, if you consider the measurement device itself to be a quantum system.
 
  • #146
zonde said:
The question is not about the "measurement problem" itself but rather about the inconsistency that this "measurement problem" creates (or does not create) in the model.
In simple terms, the question is where the relative phase factors go when a measurement is performed. Schrodinger's equation says that the relative phase factors live on, while the Born rule leaves no place for them to live on. @stevendaryl says there is an inconsistency because the relative phase factors are fine according to Schrodinger's equation, but they die according to the Born rule. I say that the relative phase factors could be just fine even after we apply the Born rule.
This I don't understand. The relative phase factors are the very reason that "matter waves" were introduced and modern QT was discovered in the first place. The relative phases are crucial for, e.g., the result of the double-slit experiment, showing interference fringes in the probability distribution (later demonstrated to be correct by Davisson and Germer with electrons for the first time).
 
  • #147
stevendaryl said:
My point is that the assumption that there is a well-defined (single) outcome is contradicted by unitary evolution, if you consider the measurement device itself to be a quantum system.
This I never understood either. Measurement devices as macroscopic objects are never described by unitary time evolution but by "master equations" of "open quantum systems". That's the whole point of the decoherence program. There's no contradiction between unitary time evolution for closed systems and effective descriptions of macroscopic, i.e., heavily coarse-grained, observables.
 
  • #148
vanhees71 said:
This I never understood either. Measurement devices as macroscopic objects are never described by unitary time evolution but by "master equations" of "open quantum systems".

Sure. But I'm talking about an idealized situation in which you have an isolated composite system that consists of a measurement device plus the system that it's measuring. You can describe the composite system quantum mechanically.

In the real world, of course, there are interactions between any macroscopic measuring device and the rest of the universe: It's interacting gravitationally and electromagnetically and so forth. But in principle, we can consider an isolated system that contains a measuring device. If the theory is inconsistent in that case, it shows that the theory is inconsistent, period.
 
  • #149
Measurements are done with real-world macroscopic apparatuses. I happily leave these problems to the philosophers, to give them some food for thought for writing papers with more footnotes than main text...
 
  • #150
vanhees71 said:
Measurements are done with real-world macroscopic apparatuses. I happily leave these problems to the philosophers, to give them some food for thought for writing papers with more footnotes than main text...

That's why I said that possibly only philosophers care whether our theories are consistent. But regardless of your attitude toward it, if a theory is inconsistent, then it can't actually be correct. So that gets back to the claim that quantum mechanics is incomplete, or at least, our understanding of it is incomplete.
 
