How Does Environmentally Induced Decoherence Affect Quantum State Reduction?

  • Thread starter: Feeble Wonk
  • Tags: Decoherence
  • #51
bhobba said:
Your 'clearly' logic escapes me.
Probably that's because you understand the problem differently.
As I see it, the problem with factorization is that there is no way to include interactions in the state vector of the universe;
or, let's say, it is not clear how interactions and interacting parts can be unequivocally defined given only the state vector of the universe and its unitary evolution.
So a way to explain it would be to show what is needed to end up with a "detector" interacting with "what it observes", rather than starting with these things.
 
  • #52
bhobba said:
In particular you need to understand mixed states and the difference between proper mixed states and improper ones.
I think I understand it. Proper mixed state is "particle here" plus "particle there".
Improper mixed state is "superposition of particle here and particle there" plus "superposition of particle here and particle there but with different phase".

The problem is that improper mixed states show that there can be no interference even without collapse. And that means observation does not have to collapse the wavefunction, and now we have even fewer ideas about how to arrive at randomized detections that contribute to the same statistical result.

Basically, decoherence breaks the "collapse" explanation but does not put anything in its place.
bhobba said:
There is no wave-function after decoherence because it's in a mixed state.
You don't describe an individual particle with a density matrix. Of course there is a wave-function for an individual particle after decoherence.
 
  • #53
zonde said:
I think I understand it. Proper mixed state is "particle here" plus "particle there".
Improper mixed state is "superposition of particle here and particle there" plus "superposition of particle here and particle there but with different phase".

Wrong:
http://pages.uoregon.edu/svanenk/solutions/Mixed_states.pdf

I suggest you spend some time becoming familiar with the concepts mathematically.

When you can explain it, mathematically, in a post, that's when you will understand it.

Here is the outline. Quantum states, despite what you may have read, are not elements of a vector space; they are positive operators of unit trace. By definition, operators of the form |u><u| are called pure states - they are the usual states because they can be mapped to a vector space. All other states are called mixed, and it can be shown they are convex sums of pure states, i.e. of the form Σ ci |bi><bi| where the ci are positive and sum to one. If you have an observation whose outcomes are the |bi><bi|, then the Born Rule shows ci is the probability of getting |bi><bi|. Note - and this is very, very important - a mixed state is NOT a superposition.
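As a concrete illustration of the outline above, here is a minimal numpy sketch (a two-dimensional toy example; the basis kets and the weights 0.5/0.5 are arbitrary choices):

```python
import numpy as np

# Orthonormal basis kets |b1>, |b2> of a two-dimensional Hilbert space
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])

# Pure state rho = |u><u| for the superposition |u> = (|b1> + |b2>)/sqrt(2)
u = (b1 + b2) / np.sqrt(2)
rho_pure = np.outer(u, u)

# Mixed state: a convex sum c1 |b1><b1| + c2 |b2><b2| with c1 + c2 = 1
rho_mixed = 0.5 * np.outer(b1, b1) + 0.5 * np.outer(b2, b2)

# Both are positive operators of unit trace ...
print(np.trace(rho_pure), np.trace(rho_mixed))   # both ≈ 1
# ... but the purity tr(rho^2) tells them apart
print(np.trace(rho_pure @ rho_pure))    # ≈ 1 (pure)
print(np.trace(rho_mixed @ rho_mixed))  # ≈ 0.5 (mixed)

# Born rule: the diagonal entries are the probabilities c_i of getting |b_i><b_i|
print(np.diag(rho_mixed))               # [0.5 0.5]

# A mixed state is NOT a superposition: the superposition keeps
# off-diagonal coherence terms, the mixture does not
print(rho_pure[0, 1], rho_mixed[0, 1])  # ≈ 0.5 vs 0.0
```

The purity ##\mathrm{tr}(\rho^2)## equals 1 exactly for pure states and is strictly less than 1 for every mixed state, which makes the distinction operational.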

Once that is understood then the difference between improper and proper mixed states can be explained.

Thanks
Bill
 
  • #54
bhobba said:
Quantum states, despite what you may have read, are not elements of a vector space; they are positive operators of unit trace. By definition, operators of the form |u><u| are called pure states - they are the usual states because they can be mapped to a vector space.
From the link you gave:
"A pure state of a quantum system is denoted by a vector (ket) |\psi\rangle with unit length, i.e. \langle\psi|\psi\rangle = 1, in a complex Hilbert space H."

So you are telling me one thing but giving links that say other things.
bhobba said:
I suggest you spend some time becoming familiar with the concepts mathematically.
You have not demonstrated such competence that I should take suggestions from you about how to spend my time. Considering this, it's a rude remark.
 
  • #55
zonde said:
"A pure state of a quantum system is denoted by a vector (ket) |\psi\rangle with unit length, i.e. \langle\psi|\psi\rangle = 1, in a complex Hilbert space H." So you are telling one thing but give links that say other other things

No I am not.

It's basic linear algebra that |u><u|, as I said, 'can be mapped to a vector space'. In particular, it's because the operators are of unit trace that the vectors are of unit length. Here is the proof. Write the |u> in |u><u| as c|u'>, where |u'> is of unit length. Since the state is of unit trace, trace(|u><u|) = |c|^2 trace(|u'><u'|) = |c|^2 = 1, i.e. |u> is of unit length.
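The proof can be checked numerically; in this sketch the values of ##c## and ##|u'\rangle## are arbitrary choices for illustration:

```python
import numpy as np

# Take |u'> of unit length and |u> = c|u'> for some complex c
u_prime = np.array([0.6, 0.8])          # 0.36 + 0.64 = 1, so unit length
c = 2.0 - 1.0j
u = c * u_prime

# trace(|u><u|) = |c|^2 * trace(|u'><u'|) = |c|^2
rho = np.outer(u, u.conj())
print(np.trace(rho).real)               # ≈ 5 = |c|^2
print(np.linalg.norm(u) ** 2)           # ≈ 5 as well: trace = squared length

# So demanding unit trace forces |c| = 1, i.e. |u> of unit length
```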

You will not make progress in QM until you understand basic linear algebra and the Dirac notation.

zonde said:
You have not demonstrated such competence that I should take suggestions from you about how to spend my time. Considering this, it's a rude remark.

Instead of casting doubt on my competence a more fruitful approach is to ask for an explanation if something seems contradictory. You will learn more that way.

Thanks
Bill
 
Last edited:
  • #56
I've been reading about factorization, which bhobba recommended, and have this question.

In radioactive decays there is phase randomization and collapse. In many worlds without factorization, does that mean there was no radioactive decay, or that it couldn't occur?
 
  • #57
For further problems with the usual attempted explanation of classical emergence through 'decoherence', see http://tinyurl.com/hn3g2tj
Comments welcome.
 
  • #58
jlcd said:
I've been reading about factorization, which bhobba recommended, and have this question.

I doubt I recommended anything on factorisation. While it is a legit issue, IMHO far too much is made of it.

Thanks
Bill
 
  • #59
rkastner said:
Comments welcome.

'The idea that unitary-only dynamics can lead naturally to preferred observables, such that decoherence suffices to explain emergence of classical phenomena (e.g., Zurek 2003) has been shown in the peer-reviewed literature to be problematic. However, claims continue to be made that this approach, also known as ‘Quantum Darwinism,’ is the correct way to understand classical emergence.'

Obviously it can't. An extra interpretive assumption is required. That doesn't mean it's not the correct way to go - I don't think it is, but that means diddly squat. I read a lot on decoherence and QM interpretations, but I can't recall anyone claiming it solves the interpretive issues by itself. Occasionally we see posts here making that or similar claims - myself or others quickly point out it's simply not possible, and pretty obviously so.

Thanks
Bill
 
  • #60
bhobba said:
Wrong:
http://pages.uoregon.edu/svanenk/solutions/Mixed_states.pdf

I suggest you spend some time becoming familiar with the concepts mathematically.

When you can explain it, mathematically, in a post, that's when you will understand it.

Here is the outline. Quantum states, despite what you may have read, are not elements of a vector space; they are positive operators of unit trace. By definition, operators of the form |u><u| are called pure states - they are the usual states because they can be mapped to a vector space. All other states are called mixed, and it can be shown they are convex sums of pure states, i.e. of the form Σ ci |bi><bi| where the ci are positive and sum to one. If you have an observation whose outcomes are the |bi><bi|, then the Born Rule shows ci is the probability of getting |bi><bi|. Note - and this is very, very important - a mixed state is NOT a superposition.

Once that is understood then the difference between improper and proper mixed states can be explained.

Thanks
Bill

All that you write is correct and pedagogical.
But in this post zonde has a problem with proper and improper mixed states.
You give him a link about pure and mixed states; the problem is that the words 'proper' and 'improper' cannot be found in the paper.
Do you have a link with the mathematical machinery for proper and improper mixed states, or do you think the difference is a question of interpretation?
 
  • #62
I find no mathematics behind proper and improper states in this link,
just words like "you prepare", "you ignore" and so on. In Everett's thesis the observer is a system; it is a part of the theory. When it has observed something it is in a given state; if it reads it again it is in another state. The physical memory is a part of the model.
I am looking for something like that behind proper and improper states.
 
  • #63
naima said:
I find no mathematics in this link behind proper and improper states.

It's not a mathematical difference. It's a preparation difference. I have written on this many, many times, so one more time: a proper mixture is when states are randomly presented for observation. An improper mixture is one that was not prepared that way. It's simple, and I will not pursue it further here in an old thread that has been resurrected.

Thanks
Bill
 
  • #64
bhobba said:
It's not a mathematical difference. It's a preparation difference. I have written on this many, many times, so one more time: a proper mixture is when states are randomly presented for observation. An improper mixture is one that was not prepared that way. It's simple, and I will not pursue it further here in an old thread that has been resurrected.

There is actually a theorem involved in the claim that there is no mathematical difference. I forgot where I read this, but someone proved a theorem to the effect that every mixed state is obtainable by tracing out degrees of freedom from a pure state. (In general, the pure state might belong to a larger, fictitious Hilbert space, though).
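The theorem in question is the purification theorem. A small numpy sketch, with an arbitrarily chosen two-level mixed state, shows both directions: the enlarged state is pure, and tracing out the fictitious subsystem recovers the mixture:

```python
import numpy as np

def partial_trace_B(rho_AB, dA, dB):
    """Trace out subsystem B from a (dA*dB) x (dA*dB) density matrix."""
    return np.trace(rho_AB.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Any mixed state is diagonal in some basis: rho = sum_i c_i |b_i><b_i|
c = np.array([0.7, 0.3])
rho = np.diag(c)

# Purification: |Psi> = sum_i sqrt(c_i) |b_i>_A |i>_B in a doubled Hilbert space
psi = np.zeros(4)
psi[0] = np.sqrt(c[0])   # |b_0>_A |0>_B  (flat index 0*2 + 0)
psi[3] = np.sqrt(c[1])   # |b_1>_A |1>_B  (flat index 1*2 + 1)
rho_AB = np.outer(psi, psi)

# The enlarged state is pure (purity 1) ...
print(np.trace(rho_AB @ rho_AB))                          # ≈ 1
# ... and tracing out the fictitious system B recovers the mixed state
print(np.allclose(partial_trace_B(rho_AB, 2, 2), rho))    # True
```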
 
  • Like
Likes bhobba
  • #65
What is an improper vs. a proper mixed state? Any state is represented by a trace-class, positive semidefinite, self-adjoint operator with trace 1, the statistical operator. You can distinguish pure states, where the statistical operator is a projection operator, from mixed states, where it is not. If your system is in a state described by a statistical operator, all you know about it are the probabilities for outcomes of measurements. It doesn't matter how the system has been prepared in this state. I don't get the point of what's written on page 10 of the cited article in #61. How do you distinguish (by observations) between cases 2 and 3? According to standard quantum theory there is no possibility to distinguish the two cases!
 
  • #66
vanhees71 said:
What is an improper vs. a proper mixed state?

An improper mixed state is one obtained by starting with the density matrix for a pure state, and then tracing over some of the degrees of freedom. So it's really where it came from, rather than the results. The result is the same, whether it's proper or improper.
 
  • #67
But, how can you distinguish proper from improper mixed states? In the example in the paper you end up with unpolarized particles in both cases, described by the stat. op. ##\hat{\rho}=1/2 \hat{1}##. Imho there's no way to distinguish the two cases with measurements made only with particle A (which is why you trace out particle B in this example). Only if you make joint measurements on both particle A and particle B you can observe the correlations implied by the preparation in the pure two-particle state.
 
  • #68
vanhees71 said:
But, how can you distinguish proper from improper mixed states?

They can't be distinguished.
 
  • Like
Likes bhobba
  • #69
vanhees71 said:
But, how can you distinguish proper from improper mixed states? In the example in the paper you end up with unpolarized particles in both cases, described by the stat. op. ##\hat{\rho}=1/2 \hat{1}##. Imho there's no way to distinguish the two cases with measurements made only with particle A (which is why you trace out particle B in this example). Only if you make joint measurements on both particle A and particle B you can observe the correlations implied by the preparation in the pure two-particle state.

You can distinguish a proper from an improper mixed state by measuring a nonlocal variable. An example is given in http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf Section 1.2.3 on p. 10.
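This can be checked numerically with the standard two-qubit example (my own sketch, not code from the linked paper): the entangled pure state ##(|00\rangle + |11\rangle)/\sqrt{2}## and a 50/50 proper mixture of ##|00\rangle## and ##|11\rangle## give identical local statistics for particle A, but different expectation values for the nonlocal observable ##\sigma_x \otimes \sigma_x##:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket00 = np.kron(ket0, ket0)
ket11 = np.kron(ket1, ket1)

# Improper mixture for A: trace B out of the entangled pure state (|00> + |11>)/sqrt(2)
bell = (ket00 + ket11) / np.sqrt(2)
rho_entangled = np.outer(bell, bell)

# Proper mixture: a 50/50 classical ensemble of |00> and |11>
rho_proper = 0.5 * np.outer(ket00, ket00) + 0.5 * np.outer(ket11, ket11)

def reduced_A(rho):
    """Trace out particle B from a two-qubit density matrix."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Locally, A is unpolarized in both cases: rho_A = 1/2 * identity
print(reduced_A(rho_entangled))   # ≈ 0.5 * I
print(reduced_A(rho_proper))      # ≈ 0.5 * I

# The nonlocal observable sigma_x (x) sigma_x distinguishes them
XX = np.kron(sx, sx)
print(np.trace(rho_entangled @ XX))  # ≈ 1
print(np.trace(rho_proper @ XX))     # 0
```

So measurements on A alone see the same ##\hat\rho = \tfrac{1}{2}\hat{1}## either way; only the joint, nonlocal observable reveals the difference.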
 
  • #70
Of course, but then I don't use the reduced description but the state of the full system. It was exactly the example on p. 10 of the above-mentioned paper which led to my question. As so often in these interpretational discussions, it's much ado about nothing!
 
  • #71
vanhees71 said:
Of course, but then I don't use the reduced description but the state of the full system. It was exactly the example on p. 10 of the above-mentioned paper which led to my question. As so often in these interpretational discussions, it's much ado about nothing!

It is not much ado about nothing. Not distinguishing these has misled some into believing that decoherence solves the measurement problem, including physicists as distinguished as Anderson. If you don't like interpretation, it must be noted that most great physicists cared deeply about it. In modern times, one can read the comments about the importance of the distinction between proper and improper mixed states in https://www.amazon.com/dp/0198509146/?tag=pfamazon01-20.
 
Last edited by a moderator:
  • Like
Likes eloheim and rkastner
  • #72
atyy said:
It is not much ado about nothing. Not distinguishing these has misled some into believing that decoherence solves the measurement problem, including physicists as distinguished as Anderson. If you don't like interpretation, it must be noted that most great physicists cared deeply about it. In modern times, one can read the comments about the importance of the distinction between proper and improper mixed states in https://www.amazon.com/dp/0198509146/?tag=pfamazon01-20.

Yeah, with the various threads on why quantum mechanics is not weird, I've been trying to clarify in my mind exactly why I still think it is weird. It's definitely the measurement problem, but I have a hard time formalizing exactly why it bothers me. But roughly speaking, orthodox quantum mechanics seems a little schizophrenic. On the one hand, most people like to assume that there is nothing going on in a measurement process that cannot be explained by quantum mechanics. But if you try to describe the whole composite system (system being measured plus system doing the measuring) using quantum mechanics, then I don't see that anything vaguely like the QM collapse postulate--after a measurement, the system is in an eigenstate of the property being measured--happens. I don't see that anything vaguely like the more minimal description--you get some eigenvalue with probabilities given by the Born rule--happens, either. If we are using QM to describe the composite system, then it's hard to see why there should be definite outcomes for measurements at all, or why probabilities come into play at all.

Decoherence is where the schizophrenia comes in. If you take the density matrix of the complete system, and trace out the environmental degrees of freedom, then you end up with a mixed-state density matrix. But then people want to interpret the mixed state using the ignorance interpretation. That doesn't make sense to me--you KNOW that the mixed state didn't arise from ignorance about the true state, because you just created the mixed state by tracing out the environmental degrees of freedom. It seems as though you're willfully forgetting what you just did.

So to me, orthodox QM just doesn't make sense. Maybe one of the other interpretations--objective collapse, or many-worlds, or Bohmian mechanics--makes sense, but the orthodox interpretation doesn't. It seems like people are willfully fooling themselves.
 
Last edited by a moderator:
  • Like
Likes eloheim and Nugatory
  • #73
Well, maybe in realizing that you have to think hard to figure out what bothers you, you only realized that there's nothing to bother about. Quantum theory tells us that nature is inherently stochastic/probabilistic/statistical. So what? That's how it is!

What is (or was for quite a while) an interesting theoretical challenge is that in our everyday experience macroscopic objects obey almost exactly the laws of classical physics, and we do not see quantum interference effects in macroscopic objects. That's why it took some time to discover quantum behavior (starting with black-body radiation in the late 1880s). I think, contrary to what atyy said in #71, that this is clearly solved by decoherence and by the fact that we are simply not able to resolve the fast dynamics at microscopic scales for many-body systems. So we get the classical world from coarse-graining the description of the macroscopically relevant slow observables at macroscopic scales. It must also have to do with the formalism of the renormalization group in QFT/statistical physics. The Wilsonian interpretation is precisely that picture of effective theories at low energy-momentum (slow and long-distance) scales emerging from more microscopic theories which reveal themselves only at high energy-momentum (fast and short-distance) scales. In this sense classical theory is an effective theory of quantum theory with some range of applicability.

The so-called measurement problem is then simply the question of how microscopic systems, sufficiently isolated from the environment to reveal quantum behavior, interact with the measurement apparatus, which provides "the environment"; the necessary classicality of measurement apparatuses was already discussed by Bohr in the early 1930s (i.e., before Heisenberg confused the quantum community with his collapse in the 1950s ;-)).

What always bothered me, before I learned about the work on decoherence, was this quantum-classical cut, introduced ad hoc as an explanation for the classical behavior of measurement apparatuses, and the even more ad hoc assumption of a collapse of the state, which in almost all real measurements never occurs, because the quantum object is "destroyed" in the measurement process and thus it is not even necessary to find a description of it as an isolated quantum system anymore. What happens at or shortly after the "measurement" is entirely a property of the measurement apparatus and not of a general theory/model of the world.
 
  • Like
Likes Nugatory
  • #74
vanhees71 said:
Well, maybe in realizing that you have to think hard to figure out what bothers you, you only realized that there's nothing to bother about. Quantum theory tells us that nature is inherently stochastic/probabilistic/statistical. So what? That's how it is!

I am not arguing with either you or stevendaryl here (I have a great deal of sympathy for both positions), but the only takeaway here may be that the two of you have different thresholds for weirdness. There is a strong element of personal taste involved when considering whether an internally consistent and empirically supported position is also satisfactory.
 
  • Like
Likes bhobba
  • #75
vanhees71 said:
Well, maybe in realizing that you have to think hard to figure out what bothers you you only realized that there's nothing to bother about. Quantum theory tells us that nature is inherently stochastic/probabilistic/statistical.

But it doesn't tell us that, at all. The only way that probabilities come into it is by the dubious steps of separating the measurement apparatus from the thing being measured, and then treating the former in a way that is inconsistent with the way the latter is treated.
 
  • #76
stevendaryl said:
But it doesn't tell us that, at all. The only way that probabilities come into it is by the dubious steps of separating the measurement apparatus from the thing being measured, and then treating the former in a way that is inconsistent with the way the latter is treated.

Specifically, you treat the system being measured as something whose state evolves unitarily according to Schrodinger's equation, and you treat the measuring device as something that has definite outcomes for measurements. That seems inconsistent to me.
 
  • #77
vanhees71 said:
(i.e., before Heisenberg confused the quantum community with his collapse in the 50ies ;-)).
It was von Neumann who, in his 1932 book, where he made QM mathematically fully respectable, also made the collapse (then called state reduction) definite and prominent. Bohm then coined the name 'collapse' for state reduction in 1951. Many people from the quantum optics community finally observed, from 1986 on, the collapse as quantum jumps in certain continuous measurements of single atoms in an ion trap, so that it is now in various quantum optics books; see, e.g., Section 8.2 of Gerry & Knight 2005.

It is not appropriate to blame Heisenberg for all this - I don't even know what Heisenberg contributed.
 
Last edited:
  • #78
Nugatory said:
I am not arguing with either you or stevendaryl here (I have a great deal of sympathy for both positions), but the only takeaway here may be that the two of you have different thresholds for weirdness. There is a strong element of personal taste involved when considering whether an internally consistent and empirically supported position is also satisfactory.

But that is not the issue. Bohr's position is fine - it's weird, live with it, we can do science with it. Dirac's position is also fine - it's weird, but will presumably be solved by quantum theory not being the final theory.

What vanhees71 is claiming is that there is no measurement problem, no classical/quantum cut in a minimal interpretation - ie. without BM or MWI. Vanhees71's claim is extremely controversial, and as far as I can tell, it is wrong, and not a matter of taste. The book by Haroche and Raimond rebuts vanhees71's position that decoherence solves the measurement problem.
 
  • Like
Likes eloheim
  • #79
stevendaryl said:
If you take the density matrix of the complete system, and trace out the environmental degrees of freedom, then you end up with a mixed-state density matrix. But then people want to interpret the mixed state using the ignorance interpretation. That doesn't make sense to me--you KNOW that the mixed state didn't arise from ignorance about the true state, because you just created the mixed state by tracing out the environmental degrees of freedom. It seems as though you're willfully forgetting what you just did.

So to me, orthodox QM just doesn't make sense.
This only proves that the talk in orthodox QM about ignorance doesn't make sense. Once one accepts that the mixed state obtained by tracing out the environmental degrees of freedom is all there is to the state of a subsystem, nothing depends anymore on knowledge or ignorance. The mixed state is a complete description of the single system. In rare cases it happens to be a pure state, for example when one looks at a single silver atom in a Stern-Gerlach experiment, projects the state to the region where one of the produced beams lives, and traces over all degrees of freedom except the silver atom spin. Every case of a preparation of a pure state can be explained in a similar way. Thus there is nothing at all that depends on knowledge or ignorance - except the common talk in the textbooks.
 
  • Like
Likes vanhees71
  • #80
A. Neumaier said:
This only proves that the talk in orthodox QM about ignorance doesn't make sense. Once one accepts that the mixed state obtained by tracing out the environmental degrees of freedom is all there is to a state of a subsystem, nothing depends anymore on knowledge or ignorance. The mixed state is a complete description of the single system.

Okay, but a mixed state can potentially describe a nonzero probability of (say) a cat being dead and a cat being alive. Okay, if you don't want to talk about cats, you can replace it by any other two macroscopically distinguishable possibilities. The mixed state formalism can account for a nonzero probability for two different macroscopically distinguishable possibilities. So either both possibilities are real (which to me means many-worlds), or one or the other is real so somehow a single possibility was selected.
 
  • Like
Likes eloheim
  • #81
stevendaryl said:
but a mixed state can potentially describe a nonzero probability of (say) a cat being dead and a cat being alive.
A theoretical mixed state, but not a mixed state realized in Nature according to the tracing-out rule given - unless the state of the big system from which this state was obtained by tracing out the environment was already very weird. A mixed state is admissible in the arguments only if we can tell how to prepare it, given the laws of Nature and the tracing-out rule. We can do that for pure spin states and for superpositions of tensor products of a few spin states, but even that only in carefully controlled situations. But no apparatus in the universe would prepare a cat in a mixed state of the kind you proposed. At least no known one - which is sufficient to explain why we don't observe these strange things. Nothing needs to be selected, since the state cannot be prepared in the first place.
 
  • Like
Likes Mentz114
  • #82
atyy said:
The book by Haroche and Raimond rebuts vanhees71's position that decoherence solves the measurement problem.
In which paragraph or page?
 
  • #83
A. Neumaier said:
A theoretical mixed state, but not a mixed state realized in Nature according to the tracing out rule given - unless the state of the big system from which this state was obtained by tracing out the environment was already very weird.

Well, part of the difficulty here is that we really can't do quantum mechanics with ##10^{23}## particles except in heuristic ways. So the weirdness is perhaps lost in the complexity. But it seems to me that you could set up a situation in which a microscopic difference (whether an electron is spin-up or spin-down) is magnified to make a macroscopic difference. That's what Schrodinger's cat is about. For that matter, that's what any measurement does. So if you consider it weird for a microscopic difference to be magnified to become a macroscopic difference, then such weirdness is an inherent part of the empirical content of QM.

Suppose you set things up so that:
  • The detection of a spin-up electron leads to a dead cat.
  • The detection of a spin-down electron leads to a live cat.
Then you create an electron that is in a superposition ##\alpha |\text{up}\rangle + \beta |\text{down}\rangle##, and you send it to the detector. What happens? Well, the Copenhagen interpretation would tell us that macroscopic objects like cats are classical, not quantum. So rather than leading to a superposition of a dead cat and a live cat, what we would get is EITHER a dead cat, with probability ##|\alpha|^2##, or a live cat, with probability ##|\beta|^2##. But that seems inconsistent to me. Why, for small systems, do we get superpositions, rather than alternatives, but for large systems, we get alternatives? That's the weirdness, if not outright inconsistency, of standard quantum mechanics.

Of course, some people claim that decoherence explains why we get alternatives, rather than superpositions, but I don't think it actually does that. What it explains is that superpositions rapidly spread with time: You start off with a single particle in a superposition of states, and then it interacts with more particles putting that composite system into a superposition, and that composite system interacts with the environment (the electromagnetic field) putting it into a superposition of states. The superposition doesn't go away, but it spreads to infect the whole universe (or our little part of it, anyway). But then a trace over everything other than the system of interest gives us what looks like a mixed state, where we can interpret the components of the mixture as alternatives, rather than superpositions.
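That spreading can be made quantitative in a toy model (a sketch under simplifying assumptions: the environment is n qubits, each of which imperfectly records which branch the system is in). The global state remains a pure superposition throughout, while the off-diagonal terms of the system's reduced density matrix fall off as ##\cos^n\theta##, where ##\cos\theta## is the overlap between the two environment-qubit record states:

```python
import numpy as np

def reduced_system(psi, d_env):
    """Reduced density matrix of the system qubit: trace out the environment.

    For a pure |Psi> reshaped to m[s, e], the partial trace is m @ m^dagger."""
    m = psi.reshape(2, d_env)
    return m @ m.conj().T

theta = 0.6
e0 = np.array([1.0, 0.0])                       # environment record of branch "0"
e1 = np.array([np.cos(theta), np.sin(theta)])   # record of branch "1": <e0|e1> = cos(theta)
sys0 = np.array([1.0, 0.0])
sys1 = np.array([0.0, 1.0])

for n in [1, 4, 16]:
    # n environment qubits, each imperfectly recording the system's branch
    E0 = np.array([1.0])
    E1 = np.array([1.0])
    for _ in range(n):
        E0 = np.kron(E0, e0)
        E1 = np.kron(E1, e1)

    # Global state |Psi> = (|0>|E0> + |1>|E1>)/sqrt(2): a pure superposition throughout
    psi = (np.kron(sys0, E0) + np.kron(sys1, E1)) / np.sqrt(2)

    rho_sys = reduced_system(psi, 2 ** n)
    # The off-diagonal coherence equals cos(theta)^n / 2 and dies off with n,
    # even though no collapse has occurred anywhere
    print(n, rho_sys[0, 1])
```

Nothing ever collapses in this model; the interference terms merely become unobservable from within the system because they have leaked into system-environment correlations.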
 
  • Like
Likes Feeble Wonk
  • #84
I think that's a misunderstanding of decoherence. We don't suddenly change the interpretation when we compute reduced density matrices. In fact, we never need to compute the reduced density matrix for decoherence. We could just work with the full quantum state. It's only a matter of convenience to compute the reduced density matrix. Quantum mechanics is a theory that predicts relative frequencies for certain events. It provides us with a probability distribution for each observable. In fact, we could get rid of the Hilbert space and operators completely and reformulate QM purely as a bunch of evolution equations for these probability distributions. Decoherence explains why those probability distributions don't usually exhibit oscillatory behaviour. For example, it explains why the probability distribution for the throw of a die is ##P_i = \frac{1}{6}## and not rather ##P_1 = P_3 = P_5 = \frac{1}{3}, P_2 = P_4 = P_6 = 0##. So decoherence explains why the probability distributions that QM predicts agree with those that we would expect classically.

What more do you expect from a physical theory than a prediction of relative frequencies? And if you don't expect more, then why does QM have problems?
 
  • #85
rubi said:
Quantum mechanics is a theory that predicts relative frequencies for certain events.

I don't think it really does that. Can you say what an "event" is, without making a macroscopic/microscopic distinction?

[edit]What I should have said is that I don't think quantum mechanics gives probabilities (relative or otherwise) without additional assumptions that seem ad hoc.
 
  • #86
stevendaryl said:
I don't think it really does that. Can you say what an "event" is, without making a macroscopic/microscopic distinction?
Yes, I think so: Let ##A## be any observable you want. Let ##\sigma(A)## be its spectrum and ##\psi_a## be the generalized eigenvectors of ##A##. The set of events for this observable is ##\mathcal B(\sigma(A))##, the smallest sigma algebra that contains all the open sets of ##\sigma(A)##, and for each such event ##B##, its probability is given by ##P(B) = \int_B |\left<\psi_a,\Psi\right>|^2\mathrm d a##. For example, ##A## could be the position operator ##\hat x(t)## at time ##t## and ##B## could just be the event "The position at time t lies between 2 and 3", which would mathematically be represented by the interval ##B=(2,3)##. This should account for every event you could think of.
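For the position example, the formula reads ##P(B) = \int_B |\psi(x)|^2\,\mathrm dx##; here is a numerical sketch, with the Gaussian packet's center and width chosen arbitrarily:

```python
import numpy as np

# Normalized Gaussian wave packet in the position representation
x0, sigma = 2.5, 1.0
x = np.linspace(-10.0, 15.0, 25001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma**2))

prob_density = np.abs(psi) ** 2

# Event B = (2, 3): "the position lies between 2 and 3"
mask = (x > 2) & (x < 3)
P_B = prob_density[mask].sum() * dx
print(P_B)                          # ≈ 0.38 for this packet

# Probabilities over the whole spectrum are normalized
print(prob_density.sum() * dx)      # ≈ 1
```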
 
Last edited:
  • #87
rubi said:
Yes, I think so: Let ##A## be any observable you want. Let ##\sigma(A)## be its spectrum and ##\psi_a## be the generalized eigenvectors of ##A##. The set of events for this observable is ##\mathcal B(\sigma(A))##, the smallest sigma algebra that contains all the open sets of ##\sigma(A)##, and for each such event ##B##, its probability is given by ##P(B) = \int_B |\left<\psi_a,\Psi\right>|^2\mathrm d a##. For example, ##A## could be the position operator ##\hat x(t)## at time ##t## and ##B## could just be the event "The position at time t lies between 2 and 3", which would mathematically be represented by the interval ##B=(2,3)##. This should account for every event you could think of.

Okay, this is definitely NOT the standard way of presenting quantum mechanics, which is what I had a complaint about. But let me take your presentation. It does not, so far, have any connection to anything with empirical content. To make a connection with something observable, you have to associate probabilities with measurement outcomes. Which means that you have to face the measurement problem, of what does it mean to measure some observable?
 
  • #88
stevendaryl said:
Okay, this is definitely NOT the standard way of presenting quantum mechanics, which is what I had a complaint about.
The formula ##P(B)## I wrote down is just the Born rule. I just wrote it in a way that allows you to directly plug in the events ##B## that you are interested in. I think it is fairly standard, at least we regularly teach it this way at my university.

But let me take your presentation. It does not, so far, have any connection to anything with empirical content. To make a connection with something observable, you have to associate probabilities with measurement outcomes.
The probabilities are given by ##P(B)##. For each observable, QM allows you to compute such a probability distribution. Let's say we measure the spin of a particle. Then my formula would give you probabilities ##P_\uparrow = P(\{\uparrow\})## and ##P_\downarrow = P(\{\downarrow\})##. These are the probabilities that predict the relative frequencies of spin measurements.

Which means that you have to face the measurement problem: what does it mean to measure some observable?
I don't understand this question. Can you explain how you would answer this question in the case of classical mechanics and how it would be different from quantum mechanics? What would it mean to measure an observable in CM?
 
  • #89
rubi said:
I don't understand this question. Can you explain how you would answer this question in the case of classical mechanics and how it would be different from quantum mechanics? What would it mean to measure an observable in CM?

To measure an observable means to set things up so that there is a correspondence between possible values of the observable and macroscopically distinguishable states of the measuring device. An example might be a pointer that pivots in a semicircle. Then you set things up so that the angle of the pointer is affinely related to the value of a real-valued observable.

Implicit in this is the assumption that the pointer actually has a definite value. If the pointer could be in a superposition of positions, then I don't know what it would mean to say that it measures an observable. And that's the case with quantum mechanics. If the system being measured is in a superposition of different values of an observable, and you let the system interact with a measurement device, I would expect (if we analyzed the measurement device itself using quantum mechanics) the result to be that the measurement device would be put into a superposition of states. (or that a larger system, including measuring device + environment, would be put into a superposition of states).
 
  • #90
The problem, which to me seems like an inconsistency in the quantum formalism, is that for a small system, such as a single electron, observables don't have definite values, in general. If an electron has spin state ##\begin{pmatrix} \alpha \\ \beta \end{pmatrix}##, what is the z-component of its spin? The question doesn't have an answer. It's in a superposition of spin-up and spin-down. But if you take a macroscopic system such as a detector, and you measure the z-component of the spin, you don't get a superposition of answers, you get either spin-up or spin-down. The macroscopic system has a definite state.

Why do macroscopic systems have definite states, if microscopic systems don't?
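The standard partial answer from decoherence can be sketched numerically (a toy model with an assumed dephasing factor ##e^{-\gamma t}##, not anything derived in this thread): environmental coupling suppresses the off-diagonal terms of the density matrix in the pointer basis, while the diagonal Born probabilities stay untouched.

```python
import math

def dephased_density_matrix(alpha, beta, gamma, t):
    """Density matrix of alpha|up> + beta|down> under assumed pure dephasing:
    off-diagonal coherences decay by exp(-gamma*t); populations are unchanged."""
    decay = math.exp(-gamma * t)
    return [
        [abs(alpha) ** 2, alpha * beta.conjugate() * decay],
        [beta * alpha.conjugate() * decay, abs(beta) ** 2],
    ]

a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)
rho_early = dephased_density_matrix(a, b, gamma=1.0, t=0.0)   # pure superposition
rho_late = dephased_density_matrix(a, b, gamma=1.0, t=50.0)   # ~ improper mixture
```

The diagonal entries (the measurement statistics) are identical at both times; only the interference terms vanish. So decoherence by itself does not pick an outcome, which is exactly the point at issue in this thread.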
 
  • #91
stevendaryl said:
To measure an observable means to set things up so that there is a correspondence between possible values of the observable and macroscopically distinguishable states of the measuring device. An example might be a pointer that pivots in a semicircle. Then you set things up so that the angle of the pointer is affinely related to the value of a real-valued observable.

Implicit in this is the assumption that the pointer actually has a definite value. If the pointer could be in a superposition of positions, then I don't know what it would mean to say that it measures an observable. And that's the case with quantum mechanics. If the system being measured is in a superposition of different values of an observable, and you let the system interact with a measurement device, I would expect (if we analyzed the measurement device itself using quantum mechanics) the result to be that the measurement device would be put into a superposition of states. (or that a larger system, including measuring device + environment, would be put into a superposition of states).
There is a difference between the mathematical formalism and reality. The fact that QM uses the mathematics of Hilbert spaces and superpositions doesn't mean that the concept of superposition somehow applies to real objects. It can only apply to mathematical objects, like vectors in a Hilbert space. The prediction of QM isn't that something is in a superposition. The prediction is rather that we will find the pointer at ##0^\circ## 50% of the time and at ##180^\circ## 50% of the time (for example). Superpositions are just an intermediate mathematical tool that allows us to obtain the numerical values for these relative frequencies, much like virtual particles are an intermediate mathematical tool.

The correspondence between measurement apparatuses and mathematics is given by observables. Every apparatus is mathematically represented as a self-adjoint operator. That doesn't mean that the apparatus is a self-adjoint operator, which of course it isn't. We use the phrase "the particle is in a superposition" just as a metaphor. It really means "the relative frequencies that describe the particle can be adequately modeled using the mathematics of superposition".
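This operational reading can be illustrated with a small simulation (my own sketch, not from the post): sampling outcomes with the Born probabilities reproduces the predicted relative frequencies of pointer positions.

```python
import random

def simulate_pointer(p_zero_deg, trials, seed=0):
    """Sample measurement outcomes: pointer at 0 deg with probability p_zero_deg,
    at 180 deg otherwise. Returns the observed relative frequency of 0 deg."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_zero_deg for _ in range(trials))
    return hits / trials

# Equal superposition: predicted relative frequency 0.5 for each pointer position
freq = simulate_pointer(0.5, trials=100_000)
```

With enough trials, the observed frequency converges to the predicted probability; nothing in the record of outcomes is ever "in a superposition".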
 
  • #92
rubi said:
There is a difference between the mathematical formalism and reality

Okay, fine. If you want to say that QM is just a recipe for getting answers, that's the "shut up and calculate" interpretation, which is fine, as far as it goes.
 
  • #93
atyy said:
The book by Haroche and Raimond rebuts vanhees71's position that decoherence solves the measurement problem.
I only found this:
"
We can say, according to Einstein terminology, that the death or life of the cat has, even before being recorded by a human mind, become an element of reality (since all entanglement has been destroyed by decoherence), but this element of reality cannot be predicted, only its probability can be estimated. Some physicists find this state of affairs uncomfortable. Others are ready to accept this inherently statistical feature of quantum theory."
 
  • #94
naima said:
I only found this:
"
We can say, according to Einstein terminology, that the death or life of the cat has, even before being recorded by a human mind, become an element of reality (since all entanglement has been destroyed by decoherence), but this element of reality cannot be predicted, only its probability can be estimated. Some physicists find this state of affairs uncomfortable. Others are ready to accept this inherently statistical feature of quantum theory."

If I remember correctly, Haroche and Raimond discuss decoherence and the measurement problem extensively around p81.
 
  • #95
stevendaryl said:
The problem, which to me seems like an inconsistency in the quantum formalism, is that for a small system, such as a single electron, observables don't have definite values, in general. If an electron has spin state ##\begin{pmatrix} \alpha \\ \beta \end{pmatrix}##, what is the z-component of its spin? The question doesn't have an answer. It's in a superposition of spin-up and spin-down. But if you take a macroscopic system such as a detector, and you measure the z-component of the spin, you don't get a superposition of answers, you get either spin-up or spin-down. The macroscopic system has a definite state.

Why do macroscopic systems have definite states, if microscopic systems don't?
If you drop the idea that mathematical terms can be directly applied to real objects ("ceci n'est pas une pipe"), this problem vanishes. A state is a mathematical representation of reality. A particle doesn't really have a position (i.e. a real number). There is no internal counter within the particle or anything like that. The real number that we ascribe to the particle is just our mathematical representation of facts about reality. You need to distinguish these concepts clearly.

The idea that a list of real numbers is enough to capture all the details about the reality of a particle is flawed, and the violations of Bell's inequality show that this idea can't possibly be saved (BM doesn't save it either). It's impossible for a theory to assign definite values to both spin-up/down and spin-left/right if the theory is supposed to agree with experiments. It is a fundamental fact about our world that this can't be done (unless you want to exploit loopholes), so a theory that acknowledges this fact can't be problematic because of this. If anything, the universe is problematic.

Macroscopic systems don't have definite states (a list of real numbers that defines their physics completely) either. It's just that assuming they do is good enough for all practical purposes.
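The Bell-violation claim above can be made concrete with the CHSH quantity (a standard textbook computation, sketched here as an illustration): for the spin singlet, the correlation at analyzer angles ##a, b## is ##E(a,b) = -\cos(a-b)##, and any theory with predetermined values for all measurement directions must satisfy ##|S| \le 2##, while quantum mechanics reaches ##2\sqrt 2##.

```python
import math

def singlet_correlation(a, b):
    """Quantum correlation E(a, b) = -cos(a - b) for the spin singlet."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return (singlet_correlation(a, b) - singlet_correlation(a, b2)
            + singlet_correlation(a2, b) + singlet_correlation(a2, b2))

# Standard optimal angles: 0 and pi/2 on one side, pi/4 and 3*pi/4 on the other
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
```

Here ##|S| = 2\sqrt 2 > 2##, so no assignment of predetermined values to all the measured spin directions can reproduce these correlations.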

stevendaryl said:
Okay, fine. If you want to say that QM is just a recipe for getting answers, that's the "shut up and calculate" interpretation, which is fine, as far as it goes.
I'm saying that QM satisfies all properties that a physical theory must have and it doesn't have inconsistencies.
 
  • #96
stevendaryl said:
Why do macroscopic systems have definite states, if microscopic systems don't?
What do you say to the answer given in the discussion in posts #83 - #109 of another thread?
 
  • #97
rubi said:
If you drop the idea that mathematical terms can be directly applied to real objects

As I said, that's the "shut up and calculate" interpretation, which I agree works fine.
 
  • #98
rubi said:
I'm saying that QM satisfies all properties that a physical theory must have and it doesn't have inconsistencies.

And I'm saying that I don't agree. You're basically taking the Copenhagen, or "shut up and calculate", approach, which to me is inconsistent: it treats macroscopic objects differently from microscopic objects. Since macroscopic objects are presumably made up of microscopic objects, that seems inconsistent to me.

You could say, as the Copenhagen people did, that no, macroscopic objects aren't made of microscopic objects. The microscopic world doesn't exist, it's just a mathematical fiction for doing calculations. That's fine. But then you need a different theory for macroscopic objects in order to build detectors and so forth. What theory is that? Copenhagen said that we basically treat macroscopic objects classically, which is fine as a heuristic. But to have two different theories--one for macroscopic objects and another for microscopic objects--is very distasteful to me.
 
  • #99
stevendaryl said:
As I said, that's the "shut up and calculate" interpretation, which I agree works fine.
But what more do you expect from a physical theory than a prediction of all relative frequencies?

It seems that you want the theory to assign a list of real numbers to each physical entity. This is not possible in our universe. So if the theory fails to do this, we should not blame the theory.

stevendaryl said:
And I'm saying that I don't agree.
If you claim that there is an inconsistency, you should be able to derive a contradiction from QM, i.e. you should be able to derive a statement of the form ##A\wedge\neg A##. Can you tell me what that statement ##A## could be?
 
  • #100
stevendaryl said:
Then you create an electron that is in a superposition ##\alpha|\text{up}\rangle+\beta|\text{down}\rangle##, and you send it to the detector. What happens? Well, the Copenhagen interpretation would tell us that macroscopic objects like cats are classical, not quantum. So rather than leading to a superposition of a dead cat and a live cat, what we would get is EITHER a dead cat, with probability ##|\alpha|^2##, or a live cat, with probability ##|\beta|^2##. But that seems inconsistent to me. Why, for small systems, do we get superpositions rather than alternatives, but for large systems, alternatives?
The Copenhagen interpretation says (independent of the size of the system) that the state collapses upon measurement, giving the definite outcome rather than the superposition.
 