Quantum mechanics and the macroscopic universe

Nick666
Don't know if this is the right place to post this...

Physicists often say classical mechanics can't explain things at subatomic levels.

So, can quantum mechanics ever explain things at the macroscopic level?
 
I wonder the same thing:
Faster-than-light effects / quantum entanglement are nowhere to be found in classical objects.
People are not both dead and not dead at the same time.
None of this makes any sense in the macroscopic world.
 
Nick666 said:
So, can quantum mechanics ever explain things at the macroscopic level?

Much of what we generally consider "macroscopic" physics can be derived from quantum mechanics. One obvious example would be properties of materials (electrical conductivity etc, and nowadays even mechanical properties). Most solid state physics is "quantum mechanical" to some extent (even if we often tend to use semi-classical approximations).

But, if you are referring to things like superposition of states etc., then it is true that this is rarely seen in the macroscopic world. However, it IS possible. Many types of solid-state qubits (quantum bits) are so big (in some cases tens of microns) that you can see them quite easily in an optical microscope.

There are also some even more "exotic" examples, such as certain types of detectors for gravitational waves; these can be HUGE (tens of tons!), but since they are cooled to very low temperatures it is still possible to observe "quantum mechanical" properties.
 
Classical mechanics cannot explain (starting from first principles) macroscopic phenomena like ferromagnetism, superconductivity or superfluidity, and the same is valid for quantum mechanics.

In general there is no (quantum) way to obtain from first principles the behaviour of a piece of matter of even 1 mm cubed.

Ciao

mgb2
 
Nick666 said:
So, can quantum mechanics ever explain things at the macroscopic level?

As so often, this touches upon interpretational issues.

The "obvious" difficulty quantum mechanics has to describe "macroscopic" physics is what Schroedinger already saw, and illustrated it dramatically with his famous cat. The cornerstone of quantum theory is the superposition principle: that the quantum state of things is a superposition of observable "classical" states.

By its very definition, this would run into an obvious problem: how can something *that is macroscopically observed* ever be in "a superposition of observable states"? How can a cat be in a superposition of "dead" and "alive"?

There are some "solutions" to this dilemma which are often erroneously taken as possible explanations, but which run into troubles. The first "solution" is this:

1) quantum mechanical superpositions are just a fancy word for probabilities.
So if you say that the "cat is in a superposition of dead and alive", then this simply means that the cat has a certain probability to be dead, and a certain probability to be alive (we simply don't know which one). Of course, this would then solve the issue.

Unfortunately, this is a very common misunderstanding, often promoted by elementary treatments and popularisations of quantum theory. But it is not true that one can always equate a quantum-mechanical superposition with a statistical distribution. Everything that is "typically quantum-mechanical" shows exactly the difference between the two. It goes under the name of quantum-mechanical interference. One can show mathematically that no statistical distribution can describe all quantum predictions.
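For concreteness, here is a minimal numpy sketch (a toy illustration, not from the original post) of that interference point: a 50/50 statistical mixture of two "paths" gives a detection probability that cannot depend on their relative phase, while the superposition does.

```python
# Toy illustration (not from the post): detection probabilities for a superposition
# of two "paths" versus a 50/50 statistical mixture of the same two paths.
import numpy as np

a = np.array([1.0, 0.0], dtype=complex)   # "went through path A" state
b = np.array([0.0, 1.0], dtype=complex)   # "went through path B" state

def detection_probability(state, phi):
    # Project onto a detector state that overlaps both paths with relative phase phi.
    detector = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return abs(np.vdot(detector, state)) ** 2

phi = 0.7  # some arbitrary relative phase

psi = (a + b) / np.sqrt(2)                     # quantum superposition
p_superposition = detection_probability(psi, phi)
p_mixture = 0.5 * detection_probability(a, phi) + 0.5 * detection_probability(b, phi)

print(p_superposition)  # (1 + cos(phi)) / 2 -- depends on the phase: interference
print(p_mixture)        # 0.5 exactly -- the mixture cannot reproduce the phase dependence
```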

This issue is made even more complicated by the fact that the superposition of *outcomes* IS to be considered as a statistical distribution. So people very often fall into the trap of assuming that *any* superposition represents a statistical distribution, but this can be shown to run into problems.

The second "solution" is:
2) maybe interactions on the macroscopic scale are always such that the superposition of macro-states just becomes one single observable state. After all, these interactions can be quite complicated, and we can't follow all the details. So, it would simply be a result of the complicated interactions that we never end up with crazy superpositions of "cat dead" and "cat alive".

This is also not possible, at least in the current version of quantum mechanics. The reason is the unitarity of the time evolution operator.
It comes down to this: if initial state |a> gives rise to "live cat", and initial state |b> gives rise to "dead cat", then it is unavoidable that the state |a> + |b> will give rise to a superposition of dead cat and live cat. No matter how complicated U is.
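To make the linearity point concrete, here is a small numpy sketch (a toy illustration, not from the post): however complicated the unitary U is, it sends |a> + |b> to U|a> + U|b>.

```python
# Toy sketch (not from the post): linearity of any unitary U means that if
# |a> -> "live cat" and |b> -> "dead cat", then |a> + |b> -> "live" + "dead".
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# An arbitrary "complicated" unitary on a small toy Hilbert space, via QR decomposition.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)

a = np.zeros(dim, dtype=complex); a[0] = 1.0   # initial state |a>
b = np.zeros(dim, dtype=complex); b[1] = 1.0   # initial state |b>

live = U @ a          # final state reached from |a> ("live cat")
dead = U @ b          # final state reached from |b> ("dead cat")
final = U @ (a + b)   # final state reached from the superposition |a> + |b>

print(np.allclose(final, live + dead))  # True: the superposition of outcomes is unavoidable
```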

So these are two non-solutions to the problem.

The "solution" by the founders of quantum theory (Bohr in particular) was simply that there is some vague borderline between the "quantum world" and the "classical world". We, human beings, live in the "classical world", which is the only "real" world. But microscopic things can sometimes "quit the classical world", undergo quantum phenomena, and, at the moment of their observation, "re-emerge" in the classical world. We're not supposed to talk about any classical property (such as the particle's position or so) during its "quantum dive", but only during its "preparation", and at the moment of its "observation". The outcome of this observation is statistical, and "re-initialises" the classical evolution from that point on. Cats are also just living in the classical world.

This goes under the name of the Copenhagen interpretation.

Of course, the above position - although practical - is philosophically rather unsatisfying, for two reasons: first, there is the ambiguity of what is physically happening between "preparation" and "measurement" ("solved" by "you shouldn't talk about it"); but more importantly, there is the ambiguity of what exactly constitutes a "measurement".

But again, this is the way one does quantum mechanics in practice.

And then, there are other views on the issue, which try to give quantum theory the possibility of giving a coherent description of what is macroscopically "classically" observed. The two that come to mind are Bohmian mechanics, and the Many Worlds Interpretation. I hesitate mentioning the "transactional" interpretation, because I'm not sure it works out completely - but that is maybe just my problem.

Some people think that quantum mechanics needs a modification in order to allow the "non-solution" 2) to apply, namely that complicated interactions give rise to the emergence of a single "outcome state". This can only be achieved by dropping the unitarity condition.

In other words, quantum mechanics as well as classical mechanics are "tangent" theories to a more complete theory which has as "asymptotic" theories quantum mechanics for the microscopic world, and classical physics for the macroscopic world. Attractive as this may seem at first sight, we already know a lot of mathematical difficulties that will arise that way, especially with respect to relativity. So if ever this is the way, it will be a *major* revision of most principles in physics.

There are also "philosophical" views on quantum mechanics, which go a bit in the direction of Copenhagen, but are more sophisticated, and which negate the existence of any objective ontology (not even a classical one). As such, quantum mechanics is just a description of what is subjectively experienced and allows one to build a coherent view of a subjective experience. The relational interpretation goes in that direction.

And finally, there is the "shut up and calculate" attitude, which tells us that all this philosophy doesn't bring in much, that quantum mechanics is a good tool to calculate outcomes of experiments, and that that is good enough. In other words, quantum mechanics is just a mathematical model that seems to do a good job, as is all of physics in the end. One shouldn't give "interpretations" to what is calculated.

A bit in the last direction goes the idea of "emerging properties", which tells us that nature just consists of "Russian dolls" of models, which are more or less appropriate for a certain level of phenomena, but that there is no coherent, all-encompassing model which can describe everything - even in principle.
So many phenomena have to be described by quantum mechanics, but at a higher level of "macroscopicity" classical physics emerges, without it being *derivable* from the underlying quantum-mechanical model, and without there being a more complete theory which has both behaviours as limiting cases.
 
vanesch said:
2) maybe interactions on the macroscopic scale are always such that the superposition of macro-states just becomes one single observable state. After all, these interactions can be quite complicated, and we can't follow all the details. So, it would simply be a result of the complicated interactions that we never end up with crazy superpositions of "cat dead" and "cat alive".

Are you talking about (or hinting at) decoherence here?

vanesch said:
This is also not possible, at least in the current version of quantum mechanics. The reason is the unitarity of the time evolution operator. It comes down to this: if initial state |a> gives rise to "live cat", and initial state |b> gives rise to "dead cat", then it is unavoidable that the state |a> + |b> will give rise to a superposition of dead cat and live cat. No matter how complicated U is.

Assuming the context is "decoherence", isn't this solved by focusing on a subsystem rather than the whole system (in which the evolution is necessarily unitary)? My understanding is that the evolution of the subsystem is not unitary, correct?
 
As usual, vanesch, a very helpful and thoughtful answer.

One question. Isn't another view simply that for objects where the de Broglie wavelength is smaller than a Planck length, it is physically impossible to observe interference effects because it is impossible to distinguish between the two superposed states?

In other words, take a bitmapped image that simulates gray by alternating between black and white pixels column by column (as opposed to in a checkerboard pattern). But say our resolution is such that we can only see every other column. (We have a really crappy down-rezzing algorithm :)). Then we either see solid black or solid white - apparently randomly. We never see gray because our resolution is simply not good enough and by definition never could be.
 
peter0302 said:
One question. Isn't another view simply that for objects where the de Broglie wavelength is smaller than a Planck length, it is physically impossible to observe interference effects because it is impossible to distinguish between the two superposed states?

No, remember that interference and related effects are -in general- NOT something that necessarily gives rise to real fringes or indeed anything "visible". Quantum mechanics is much more general than that, and quantum states do not need to refer to anything "physical"; in general they describe what I guess you could call the properties of a given system. Hence, the "border" (if you can call it that) between quantum mechanics and the macroscopic world has little to do with the actual size of an object.

It is e.g. possible to perform interferometry in phase space in a way that is completely analogous to optical interferometry in real space.
See e.g.
http://arxiv.org/abs/cond-mat/0512691
 
nanobug said:
Assuming the context is "decoherence", isn't this solved by focusing on a subsystem and not the whole system, in which the evolution is necessarily unitary? My understanding is that the evolution of the subsystem is not unitary, correct?

There is a misunderstanding about what decoherence achieves: it doesn't solve the "and/or" problem by itself. The "trick" with the reduced density matrix begs the question, because the statistical interpretation of the diagonal elements of the reduced density matrix ALREADY assumes that one goes from a superposition to a statistical mixture (otherwise, the partial trace over "the rest" wouldn't make any sense: it only makes sense as a "sum over mutually exclusive events").

So decoherence doesn't solve the issue by itself. It HELPS solve the issue if we propose a framework in which the and/or problem IS treated, such as the many worlds interpretation, and indicates WHY we can take the "superposition -> statistical mixture" transition on "elementary" states without taking into account the complicated interaction with the environment, and moreover why this happens in the "pointer basis".
 
  • #10
vanesch said:
So decoherence doesn't solve the issue by itself.

I was specifically addressing your claim that the evolution of the system is always unitary and that this presents a problem in the measurement of "classical" outcomes. I agree with you. However, if one restricts oneself to a subsystem, the evolution is no longer unitary, correct? It is in this case that one can talk about a statistical mixture, which to me appears considerably more 'classical' than pure states.
 
  • #11
nanobug said:
I was specifically addressing your claim that the evolution of the system is always unitary and that this presents a problem in the measurement of "classical" outcomes. I agree with you. However, if one restricts oneself to a subsystem, the evolution is no longer unitary, correct? It is in this case that one can talk about a statistical mixture, which to me appears considerably more 'classical' than pure states.

This "statistical mixture" is obtained by taking the reduced density matrix, which is obtained by taking the density matrix of the overall pure state, and make a partial trace over the other degrees of freedom. But - that's what I said - in order even to give the meaning of a density matrix to this "reduced density matrix", with its probability interpretation, one has to give a meaning to this partial trace. Indeed, there's nothing in quantum mechanics by itself that tells us that by taking a partial trace, one obtains something like a density matrix! This only makes sense if we ALREADY interpret the pure state as a statistical mixture, and then make the sum over the mutually exclusive events that correspond to the same outcome of the subsystem, but different outcomes of the other degrees of freedom.
 
  • #12
You might try reading Feynman's "QED" - he does an excellent job of showing how the probabilities of the quantum world translate into the traditional world of classical physical interaction at the macro level.
 
  • #13
Measurements are almost always uncertain. So, for example, let's measure the width of a piece of paper. I get 7.8", then I get 7.87", and then 7.92", and... Is this deterministic?

We don't know what the "real" width is; each measurement is different -- and hardly predictable. Enter the statisticians, who tell us that the best estimate of the width is the average of the measurements. There's at least one way to do better -- the more measurements you make, the smaller the standard deviation of the mean becomes, and the more accurate the mean becomes. Still, you never know for sure. As long as those measurement errors are around, certainty is an illusion -- like in Plato's cave.

In fact, your visual perception is statistical in nature; a blend of quantum and classical probabilities. Your eyes are in constant motion, scanning the visual field, sometimes guided by other things seen. Sometimes the scanning is random. We see and hear averages, and not the pristine world of classical physics.

Regards,
Reilly Atkinson
 
  • #14
vanesch said:
This only makes sense if we ALREADY interpret the pure state as a statistical mixture

OK, this is the part I am having trouble with.

My understanding of a 'pure state' is that it shows interference (non-zero off-diagonal) terms in a density matrix. Not so with a 'mixed state', in which the off-diagonal terms are zero. As such, how can you interpret the 'pure state' as a 'mixed state'?
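As a minimal sketch of the distinction being asked about (a generic single-qubit example, not something from the thread): the pure superposition and the 50/50 mixture have the same diagonal but differ in their off-diagonal terms.

```python
# Generic single-qubit example: pure superposition vs 50/50 statistical mixture.
import numpy as np

ket0 = np.array([[1.0], [0.0]], dtype=complex)
ket1 = np.array([[0.0], [1.0]], dtype=complex)

plus = (ket0 + ket1) / np.sqrt(2)              # pure state (|0> + |1>)/sqrt(2)
rho_pure = plus @ plus.conj().T                # has off-diagonal (coherence) terms

rho_mixed = 0.5 * ket0 @ ket0.conj().T \
          + 0.5 * ket1 @ ket1.conj().T         # diagonal only: no coherences

print(np.round(rho_pure.real, 3))   # [[0.5 0.5] [0.5 0.5]]
print(np.round(rho_mixed.real, 3))  # [[0.5 0. ] [0.  0.5]]
# Same diagonal (same statistics in this basis); only the pure state carries the
# off-diagonal terms responsible for interference.
```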
 
  • #15
reilly said:
Measurements are almost always uncertain. So, for example, let's measure the width of a piece of paper. I get 7.8", then I get 7.87", and then 7.92", and... Is this deterministic? We don't know what the "real" width is; each measurement is different -- and hardly predictable.
This explanation fits nicely with a possible definition of science, e.g.--science = uncertain knowledge of what is real.
 
  • #16
nanobug said:
OK, this is the part I am having trouble with.

My understanding of a 'pure state' is that it shows interference (non-zero off-diagonal) terms in a density matrix. Not so with a 'mixed state', in which the off-diagonal terms are zero. As such, how can you interpret the 'pure state' as a 'mixed state'?

Well, that's exactly the problem: you have to give a probability interpretation in one way or another to the diagonal elements of the density matrix (the overall density matrix) even before you can consider the reduced density matrix of the subsystem as "a density matrix".

In other words, taking the reduced density matrix by itself doesn't solve the and/or problem: you have to have solved it already before this matrix has a meaning!
 
  • #17
vanesch said:
Well, that's exactly the problem: you have to give a probability interpretation in one way or another to the diagonal elements of the density matrix (the overall density matrix) even before you can consider the reduced density matrix of the subsystem as "a density matrix". In other words, taking the reduced density matrix by itself doesn't solve the and/or problem: you have to have solved it already before this matrix has a meaning!

This is a good point but I am not sure it's the only possibility.

Interpreting the diagonal elements of the density matrix as probabilities relies, it would seem to me, on the idea of 'collapse' of the wave function. But this is exactly what decoherence is trying to explain, i.e., that (apparent) collapse. As such, it appears to me that the meaning of the density matrix could be obtained a posteriori from what we get from the reduced density matrix (which is the actual stuff that we measure in the real world). Wouldn't this way of looking at things solve the problem you presented?
 
  • #18
vanesch said:
Well, that's exactly the problem: you have to give a probability interpretation in one way or another to the diagonal elements of the density matrix (the overall density matrix) even before you can consider the reduced density matrix of the subsystem as "a density matrix".

In other words, taking the reduced density matrix by itself doesn't solve the and/or problem: you have to have solved it already before this matrix has a meaning!

Ok, but at least there is a consistency--you give a prob. interpretation to the diagonal elements of the overall density matrix, and you end up interpreting the reduced density matrix as a 'density matrix'.
 
  • #19
gptejms said:
Ok, but at least there is a consistency--you give a prob. interpretation to the diagonal elements of the overall density matrix, and you end up interpreting the reduced density matrix as a 'density matrix'.

Sure! Of course, *once* you give a prob. interpretation to the overall density matrix, then the reduced density matrix also has a sense as "density matrix" with a probability interpretation, and even helps you understand why there are no *visible* interference effects anymore.

But sometimes people claim that decoherence gives an explanation for the probability interpretation itself, and that's not true because you need it already before.
 
  • #20
vanesch said:
Sure! Of course, *once* you give a prob. interpretation to the overall density matrix, then the reduced density matrix also has a sense as "density matrix" with a probability interpretation, and even helps you understand why there are no *visible* interference effects anymore.

But this tells you that the prob. interpretation is a good one to start with--as a result of this the reduced density matrix can be interpreted as a 'density matrix' plus you knock off (visible) interference effects--not a bad bargain!
 
  • #21
gptejms said:
But this tells you that the prob. interpretation is a good one to start with--as a result of this the reduced density matrix can be interpreted as a 'density matrix' plus you knock off (visible) interference effects--not a bad bargain!

Of course! We all know that in one way or another, the probability interpretation is correct! It is the way one verifies quantum mechanics against experiments. The only question is how it comes about! Now if people say, it comes about *because of decoherence*, then that's wrong. But if we have *another* way of explaining the occurrence of probabilities (as "experience of worlds" or "nonlinearities and projection" or even the "transit from quantum to classical"), then decoherence helps making the story "coherent", as you point out.
 
  • #22
vanesch said:
Of course! We all know that in one way or another, the probability interpretation is correct! It is the way one verifies quantum mechanics against experiments. The only question is how it comes about! Now if people say, it comes about *because of decoherence*, then that's wrong. But if we have *another* way of explaining the occurrence of probabilities (as "experience of worlds" or "nonlinearities and projection" or even the "transit from quantum to classical"), then decoherence helps making the story "coherent", as you point out.

Agreed--decoherence only helps make the story coherent. But I feel *another* way of really 'explaining' the occurrence of probabilities will remain a distant dream--we have to contend with the fact that this is as far as physics gets you!
 
  • #23
vanesch said:
Of course, *once* you give a prob. interpretation to the overall density matrix, then the reduced density matrix also has a sense as "density matrix" with a probability interpretation

Once again, I disagree.

The probability interpretation is an operational interpretation, i.e., made to match measurements. However, measurements always depend on the process of decoherence. It is therefore the process of decoherence that forces one to operationally interpret the reduced density matrix as 'probabilities'. After all, the full density matrix is inaccessible to measurement, and only through the window of the reduced density matrix and via the process of decoherence may one peek inside.

In other words, it seems to me that technically it is not necessary to postulate a probabilistic interpretation of the density matrix a priori. That insight can be gained a posteriori after one goes through decoherence via the reduced density matrix.
 
  • #24
peter0302 said:
In other words, take a bitmapped image that simulates gray by alternating between black and white pixels column by column (as opposed to in a checkerboard pattern). But say our resolution is such that we can only see every other column. (We have a really crappy down-rezzing algorithm :)). Then we either see solid black or solid white - apparently randomly. We never see gray because our resolution is simply not good enough and by definition never could be.

Haha, check this out. Look at it, then move back 5-6 meters and look at it :). Weird.

http://minutillo.com/steve/weblog/images/hybrid-face.jpg

Thanks for all the answers, guys.

I know that one of the interpretations of Bell's theorem experiments is that logic isn't valid (correct me on this one).

Why would one say that? Why not just say: logic at the quantum level is different from logic at the macroscopic level.
 
  • #25
Nick666 said:
Haha, check this out. Look at it, then move back 5-6 meters and look at it :). Weird.

http://minutillo.com/steve/weblog/images/hybrid-face.jpg

Thanks for all the answers, guys.

I know that one of the interpretations of Bell's theorem experiments is that logic isn't valid (correct me on this one).

Why would one say that? Why not just say: logic at the quantum level is different from logic at the macroscopic level.

Be careful not to mix things up completely: optical illusions have nothing to do with Bell's theorem or with logic.
 
  • #26
nanobug said:
Once again, I disagree.

The probability interpretation is an operational interpretation, i.e., made to match measurements. However, measurements always depend on the process of decoherence. It is therefore the process of decoherence that forces one to operationally interpret the reduced density matrix as 'probabilities'. After all, the full density matrix is inaccessible to measurement, and only through the window of the reduced density matrix and via the process of decoherence may one peek inside.

Sure. But...

In other words, it seems to me that technically it is not necessary to postulate a probabilistic interpretation of the density matrix a priori. That insight can be gained a posteriori after one goes through decoherence via the reduced density matrix.

Yes, but the *very definition* of the reduced density matrix relies on the probabilistic interpretation of the overall wavefunction.

Imagine the system and the environment as systems A and B.

Now, at a certain point, we will have as an overall wavefunction:

|psi> = a |a1>|b1> + b |a2>|b2> + c |a3>|b3> + ...

Now, consider a "measurement basis" for the environment: {|c1>, |c2> ...}.

This means that we can express |b1> in this basis, |b2> in this basis etc...

|psi> = a |a1> {c11 |c1> + c12 |c2> + ... } + b |a2> { c21 |c1> + c22 |c2> + ... } + ...

Now, consider that we do a FULL measurement on this state, in the basis {|a1>|c1>, |a1>|c2>, ..., |a2>|c1>, ...}. That is, we consider that we do a measurement on the system as well as on the environment.

We now USE the probability interpretation, which tells us that we will find, with probability |a c11|^2, that the outcome is |a1>|c1>, with probability |a c12|^2 that the outcome is |a1> |c2> etc...

These are mutually exclusive results. But consider now that we only calculate the probability to have |a1>, no matter what the environment is in. In that case, we SUM the probabilities over the mutually exclusive events which satisfy the wanted outcome |a1>, so we sum |a c11|^2 + |a c12|^2 + ...

Well, this sum is nothing else but the (1,1) diagonal element of the reduced density matrix. In the same way, the probability for |a2> will be:
|b c21|^2 + |b c22|^2 + ...
which is nothing else but the (2,2) diagonal element of the reduced density matrix.

But in order to call this number "the probability to have outcome |a2>, no matter what state the environment is in", we already needed to interpret the expansion of |psi> in a probabilistic way, so that we could make the sum over probabilities of exclusive events.

Now, the off-diagonal element (1,2) of the reduced density matrix will be:
a b* x {c11 c21* + c12 c22* + ... } which will reduce to 0 if |b1> and |b2> are orthogonal. THIS is what decoherence tells us, that the environment states that entangle with the system states in a measurement, will essentially be orthogonal and remain so.

As such, we can interpret the reduced density matrix as the "density matrix" of a statistical mixture of systems only, which only have diagonal elements, and which can be interpreted as probabilities (and hence, that this reduced density matrix behaves "well" as a density matrix).
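The construction above can be checked numerically. Here is a rough numpy sketch (a toy illustration following the states |a1>|b1>, |a2>|b2> introduced above, with a randomly chosen toy environment): entangle a system qubit with environment states |b1>, |b2>, trace out the environment, and watch the off-diagonal elements of the reduced density matrix track the overlap <b2|b1>.

```python
# Rough numpy sketch following the construction above: a system qubit entangled
# with an "environment"; the reduced density matrix is obtained by a partial trace,
# and its off-diagonal elements are proportional to the overlap <b2|b1>.
import numpy as np

rng = np.random.default_rng(1)
dim_env = 8

def normalized(v):
    return v / np.linalg.norm(v)

# System states |a1>, |a2> and (random, toy) environment states |b1>, |b2>.
a1 = np.array([1.0, 0.0], dtype=complex)
a2 = np.array([0.0, 1.0], dtype=complex)
b1 = normalized(rng.normal(size=dim_env) + 1j * rng.normal(size=dim_env))
b2 = normalized(rng.normal(size=dim_env) + 1j * rng.normal(size=dim_env))
# Strong decoherence corresponds to |b2> being (essentially) orthogonal to |b1>:
b2_orth = normalized(b2 - np.vdot(b1, b2) * b1)

def reduced_density_matrix(b_second):
    # |psi> = (|a1>|b1> + |a2>|b_second>) / sqrt(2), then trace out the environment.
    psi = (np.kron(a1, b1) + np.kron(a2, b_second)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, dim_env, 2, dim_env)
    return np.trace(rho, axis1=1, axis2=3)   # partial trace over the environment index

print(np.round(reduced_density_matrix(b2), 3))       # off-diagonals ~ <b2|b1> / 2
print(np.round(reduced_density_matrix(b2_orth), 3))  # off-diagonals ~ 0: looks like a diagonal mixture
```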
 
  • #27
So far not.
 
  • #28
Nice picture, Nick. :)

Have to chime in again. The fact is that an interference pattern from an object with sufficiently small de Broglie wavelength will be indistinguishable from a Gaussian pattern. If you did a double slit experiment with bullets, but assumed quantum effects applied, you could, in theory, map out an interference pattern. However, the fringes would be so close together that you could never actually observe them in practice.

Why do we assume the bullets are not exhibiting quantum effects merely because we cannot see them?
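As a rough back-of-the-envelope check of the bullet example (the mass, speed and slit geometry below are made-up illustrative numbers, not anything from the post): the de Broglie wavelength lambda = h/(m v) and the corresponding fringe spacing come out absurdly small.

```python
# Back-of-the-envelope estimate with made-up, purely illustrative numbers.
h = 6.626e-34            # Planck constant, J*s

mass = 0.01              # kg   (assumed ~10 g bullet)
speed = 300.0            # m/s  (assumed muzzle velocity)
wavelength = h / (mass * speed)   # de Broglie wavelength, lambda = h / (m v)

slit_separation = 1e-3   # m    (assumed 1 mm slit separation)
screen_distance = 10.0   # m    (assumed distance to the detection screen)
fringe_spacing = wavelength * screen_distance / slit_separation

print(f"de Broglie wavelength: {wavelength:.1e} m")     # ~2e-34 m
print(f"fringe spacing:        {fringe_spacing:.1e} m")  # ~2e-30 m, hopelessly unresolvable
```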
 
  • #29
vanesch said:
Yes, but the *very definition* of the reduced density matrix relies on the probabilistic interpretation of the overall wavefunction.

I think we are essentially in agreement with the exception of the following "twist": what I am suggesting is that by interpreting the reduced density matrix in a probabilistic way (given that this is what we measure and what we measure "forces" the probabilistic interpretation on us) we may then work "backwards" and conclude that the overall wave function has to be also interpreted in a probabilistic way for things to be self-consistent.

In the form of a question: if the reduced density matrix is given the probabilistic interpretation is it possible for the full wave function to be interpreted in any way other than probabilistic and keep the formalism consistent?
 
  • #30
nanobug said:
I think we are essentially in agreement with the exception of the following "twist": what I am suggesting is that by interpreting the reduced density matrix in a probabilistic way (given that this is what we measure and what we measure "forces" the probabilistic interpretation on us) we may then work "backwards" and conclude that the overall wave function has to be also interpreted in a probabilistic way for things to be self-consistent.

Ah, yes, that's a way to see things.

In the form of a question: if the reduced density matrix is given the probabilistic interpretation is it possible for the full wave function to be interpreted in any way other than probabilistic and keep the formalism consistent?

Probably not. It rings a bell called Gleason's theorem, but I don't know to what extent this is air-tight.
 
  • #31
Is it incorrect to state that the Schroedinger's Cat thought experiment is a metaphor? My understanding is that "One can therefore assert that a quantum superposition of macroscopic states is never produced in reality (Roland Omnes, Understanding Quantum Mechanics, when discussing decoherence)" and therefore, a macroscopic description of a quantum superposition most likely is imperfect. While Schroedinger's Cat is useful, isn't it "just" a necessarily imperfect description of the micro realm, using a metaphor to describe a situation that our "classic" realm brains can attempt to appreciate? Is this an incorrect position?
 
  • #32
Sample1 said:
Is it incorrect to state that the Schroedinger's Cat thought experiment is a metaphor? My understanding is that "One can therefore assert that a quantum superposition of macroscopic states is never produced in reality. (Roland Omnes, Understanding Quantum Mechanics, when discussing decoherence)" and therefore, a macroscopic description of quantum superposition most likely is imperfect. While Schroedinger's Cat is useful, isn't it "just" a necessarily imperfect description of the micro realm, using a metaphor to describe a situation that our "classic" realm brains can attempt to appreciate? Is this an incorrect position?

Actually, someone once told me that it was more of a wry observation by Schrödinger, who had a hard time coming to terms with quantum indeterminacy; his demonstration was meant simultaneously to show what was happening in the quantum world and to slightly poke fun at the seemingly absurd notion of superposition.

That said, yes, it is a thought experiment, and no, it shouldn't be applied to classical setups; but as you say, it is a good analogy for what is going on at the quantum level, because we don't necessarily have the right language to explain it precisely in quantum terms. And not everyone is versed in the complexity of the mathematics, nor can everyone immediately grasp such mathematical concepts.
 
  • #33
Sample1 said:
One can therefore assert that a quantum superposition of macroscopic states is never produced in reality

That is not correct. One can formalise that statement into something called the "Leggett criteria", which can be used to "test" for macroscopic quantum coherence. Over the past 20 years or so many systems have been shown to fulfill them, i.e. quantum superpositions of macroscopic states DO exist.
The most obvious demonstration of this is solid state qubits, which are obviously macroscopic and still exhibit quantum coherence (which is why they are called qubits).
However, the earliest example was -as far as I know- the demonstration of macroscopic quantum tunnelling in Josephson junctions in the early eighties (1983-1984?), which ALSO fulfills the Leggett criteria.

There is a long discussion about this in Takagi's book
"Macroscopic Quantum Tunneling ", £22 from Amazon.
 
  • #34
Over the past 20 years or so many systems have been shown to fulfill them, i.e. quantum superpositions of macroscopic states DO exist.​

The full quote is: One can therefore assert that a quantum superposition of macroscopic states is never produced in reality. Decoherence is waiting to destroy them before they can occur. The same is true for Schroedinger's cat: the stakes are put down as soon as a decay has been detected by the devilish device, before the poison phial is broken. The cat is only a wretched spectator. (Roland Omnes, Understanding Quantum Mechanics)

f95toli: I was aware of solid state qubits. I should have emphasized that I am referring to a complex macroscopic system, like a living cat.

Does that help? I will check out Takagi's book, thanks.

Schroedinger's Dog, thanks for your reply.
 
  • #35
The only place a cat can be alive and dead at the same time is in the human mind. We can imagine a world that does not and cannot exist in whatever passes for human reality. Further, the Schrodinger cat scenario has nothing to do with the details or nuances of quantum theory; instead it has to do with the basic nature of applied probability theory.

The state of the cat is conditional on the state of the killing device. This device could be triggered by some nuclear decay, or by whether the Red Sox won on a particular day by the score of 3 to 1. It's all about conditional probability, which, of course, is purely in your mind. You know that if the killing device is on, the cat is dead. If the device is off, the cat is alive. But common sense and experience say that in Nature we never observe a cat that is simultaneously alive and dead. Why invent such a mythical creature? It's simply: if A then B; if not A then not B. Why invent such a thing, one that is "not B and B" all at once? Occam says a cat is alive or dead, but not both.

QM is indeed odd, but not as odd as a simultaneously alive and dead creature.

Regards,
Reilly Atkinson
 
  • #36
Appreciate all the comments. I'm new to these forums. Looking forward to lurking around. Path integrals anyone? Joking...
 
  • #37
Sample1 said:
Over the past 20 years or so many systems have been shown to fulfill them, i.e. quantum superpositions of macroscopic states DO exist.​

The full quote is: One can therefore assert that a quantum superposition of macroscopic states is never produced in reality. Decoherence is waiting to destroy them before they can occur.

But really, it depends on what you mean by "macroscopic". Should 10^11 particles making up the "object" be considered "macroscopic"? If so, then it has been done in the Delft/Stony Brook experiments. And in fact, the most recent proposal coming from Penrose and company would go even much larger than that, using a set of mirrors.

Zz.
 
  • #38
Sample1 said:
Is it incorrect to state that the Schroedinger's Cat thought experiment is a metaphor? My understanding is that "One can therefore assert that a quantum superposition of macroscopic states is never produced in reality (Roland Omnes Understanding Quantum Mechanics when discussing decoherence)"

Well, there are those people who claim that the macroscopic world is NOT ruled by quantum mechanics (that is, the superposition principle is not valid), and there are those that claim that the macroscopic world as well as the microscopic world are ruled by the same physical theory.

The first ones have to explain where quantum mechanics ceases to be valid, and how and why it links to the macroscopic theory etc...

The second ones have to explain how it comes about that we don't OBSERVE obvious superpositions. One can find such an explanation, and such a view is called a "many worlds" view. Indeed, the misunderstanding of Schroedinger with his cat, and others, is to think that, for instance, *within the same environment* one will see some kind of ghostly mixture of a dead and a live cat. But this is not what would happen, if quantum mechanics were true on the macroscopic level: quickly, one macroscopic state (say, live cat) would entangle with its environment, including the "observer", and produce ONE set of consistent states, and the other macroscopic state (dead cat) would entangle DIFFERENTLY with the environment, to produce an entirely different but consistent set of states, ALSO including the "observer" (but in a different state now).

So each individual "observer state" would only see ONE thing: the first state would be such that it is consistent with having seen a live cat, and the second observer state would be in a state consistent with having seen a dead cat. NO observer state would be present that "sees both at the same time". And so, no, quantum mechanics does NOT predict, even on the macroscopic level that an observer would SEE "a cat both alive and dead at the same time".
 
  • #39
vanesch said:
Well, there are those people who claim that the macroscopic world is NOT ruled by quantum mechanics (that is, the superposition principle is not valid), and there are those that claim that the macroscopic world as well as the microscopic world are ruled by the same physical theory.

The first ones have to explain where quantum mechanics ceases to be valid, and how and why it links to the macroscopic theory etc...

The second ones have to explain how it comes about that we don't OBSERVE obvious superpositions. One can find such an explanation, and such a view is called a "many worlds" view. Indeed, the misunderstanding of Schroedinger with his cat, and others, is to think that, for instance, *within the same environment* one will see some kind of ghostly mixture of a dead and a live cat. But this is not what would happen, if quantum mechanics were true on the macroscopic level: quickly, one macroscopic state (say, live cat) would entangle with its environment, including the "observer", and produce ONE set of consistent states, and the other macroscopic state (dead cat) would entangle DIFFERENTLY with the environment, to produce an entirely different but consistent set of states, ALSO including the "observer" (but in a different state now).

So each individual "observer state" would only see ONE thing: the first state would be such that it is consistent with having seen a live cat, and the second observer state would be in a state consistent with having seen a dead cat. NO observer state would be present that "sees both at the same time". And so, no, quantum mechanics does NOT predict, even on the macroscopic level that an observer would SEE "a cat both alive and dead at the same time".


First, at the macroscopic level quantum superposition states are very close in energy -- per the usual assumptions about reservoirs. Further, the environment will interact with the object and will create thermal fluctuations. For both a classical statistical approach and a quantum one, the upshot is that the actual state "seen" will be an average one. How so? I'll start with (1): the important states -- the ones to be seen -- are essentially degenerate. Then I'll assume that the thermal fluctuations can be modeled as a boson field, which creates a random walk of the macroscopic object in momentum space. A random-walk interaction in momentum space, using degenerate perturbation theory, gives a single state with non-zero energy, the average state in fact. The other states correspond to quasi-particles with zero energy. So we see the average. We do the same thing in ascribing a continuous nature to electric current.

In fact, given the way our visual system works, we actually see an average of an average. It seems to me that for macroscopic objects, the quantum fluctuations are dwarfed by the thermal fluctuations -- something like a quantum fluctuation of 0.01 eV vs a thermal fluctuation of maybe 100 eV from a speeding molecule. Again, it seems to me that a complete analysis of seeing macroscopic objects will show clearly why we don't see macroscopic superpositions thereof -- we don't have the necessary resolution to do so.

(This is very similar to the basics of superconductivity.)

Note: I've tried numerous times to find an understandable account of decoherence, without much luck. So, the extent to which my ideas are in consonance or dissonance with decoherence is a mystery to me.

Regards,
Reilly Atkinson
 
  • #40
reilly said:
I've tried numerous times to find an understandable account of decoherence, without much luck.

I think this webpage does a decent job at providing a somewhat intuitive view of decoherence:

http://www.ipod.org.uk/reality/reality_decoherence.asp
 
  • #41
nanobug said:
I think this webpage does a decent job at providing a somewhat intuitive view of decoherence:

http://www.ipod.org.uk/reality/reality_decoherence.asp

That article has an awful lot of words; I was hoping for a more concise description. From what I can tell, the notion of decoherence is very similar to getting to the target state of a non-equilibrium statistical mechanics system, which, of course, is what my last post is about.

Further, I saw nothing about collapse in the classical case. That is, for example, before the conclusion of a football game, at best we can know the estimated probability of, say, the Seattle Seahawks winning over the Oakland Raiders. Once the game is concluded, the initial probability of winning becomes the certainty of winning. The probability system valid prior to the win collapses to 0% or 100%, from, maybe, a 57% probability that the Seahawks win.

Collapse is the handmaiden of any probability system -- because we are talking about the application of probability before and after some event. At the minimum, this event will result in a new probability system, conditional on the event.

Another strong reason for thermal effects in macroscopic measurements, at least for human vision, is that the light we see is a superposition of many photon coherent states. This means that Poisson processes are at work -- we are talking quantum E&M fields of classical currents -- which means the light with which we see is generated by random processes, which, I surmise, tend to behave along the lines of stochastic convergence.

And, the rods and cones of your eye are basically photoelectric detectors, and are quantum devices. There's tons of noise, in the sense of a communication system. As Shannon intuited, and Feinstein proved, the best way to beat noise is to take averages, and that's exactly what your visual system does. Both spatial and temporal averages are used. Not only that, but the samples involved are typically large, so that the standard deviations of the means involved are very small. (This is very nicely explained in Dowling's The Retina, and in Shannon and Weaver's Communication Theory.)
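As a toy numerical illustration of that averaging point (nothing specific to vision, just the generic statistics): the spread of the sample mean shrinks like 1/sqrt(N).

```python
# Toy illustration of averaging beating noise: std of the mean ~ sigma / sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
true_signal, noise_sigma = 1.0, 0.5

for n_samples in (1, 100, 10000):
    # Repeat the "measure n_samples times and average" experiment 5000 times.
    means = rng.normal(true_signal, noise_sigma, size=(5000, n_samples)).mean(axis=1)
    print(n_samples, round(float(means.std()), 4))  # shrinks roughly as 0.5 / sqrt(n_samples)
```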


It seems reasonable to assert that thermal/random perturbations on many systems will result in convergence to the mean -- stochastic convergence, of course -- and thus a single value for a measurement is explainable.

However, it seems to me that there are plenty of systems not so amenable to experimental certainty. For example, consider a variant on the Kramers double-well problem. For simplicity, consider two identical potential wells connected by a barrier.

(Schematically: a square well of depth -V and width L, then a barrier of height V' and width L', then a second identical well.)

The wells both go from V=0 to -V, with width L, while the barrier goes from -V to V', with width L'. Assume that the wells are deep enough to have bound states, and that V' >> V. Can you demonstrate how decoherence solves the "collapse" issue in such a setup?

We are, of course, talking scattering, which here can have four basic outcomes. The particle, incident from the left, can end up captured in either one of the wells, can be buried in the barrier, or can proceed off to the right as a free particle. What can decoherence tell us about the outcomes?

Regards,
Reilly Atkinson
 
  • #42
Just a random question, but wouldn't light hitting our retina have decohered long before then, given that it travels through the lens and the aqueous and vitreous humour, not to mention the atmosphere? Why would your photoreceptors be quantum devices?
 
  • #44
reilly said:
Further, I saw nothing about collapse in the classical case. That is, for example, before the conclusion of a football game, at best we can know the estimated probability of, say, the Seattle Seahawks winning over the Oakland Raiders. Once the game is concluded, the initial probability of winning becomes the certainty of winning. The probability system valid prior to the win collapses to 0% or 100%, from, maybe, a 57% probability that the Seahawks win.

Of course, decoherence and other interpretations don't mean anything if you think that a quantum state is already a statistical ensemble...

The whole point in all these things is to try to give a picture of how a *single actual physical state* gives rise to a *statistical distribution* over a set of potential states: it is the whole interpretational difficulty.

But my question to you is: IF, as you claim regularly, a quantum state is nothing else but a statistical distribution (the particle came through one of the slits, only we didn't know which one), then why don't we work directly with the probability distributions? Why do we bother using amplitudes? In what way does quantum mechanics differ then from classical statistical mechanics?

See, in a state: |az+> |bz-> - |az-> |bz+>, why don't we say that we have 50% of |az+> |bz-> and 50% of |az-> |bz+>, and why don't we work out the consequences *in the first case*, then the consequences *in the second case*, and consider that the observed statistics will be a 50%-50% mixture of these consequences?
Answer: because this doesn't work out!
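A quick numerical check of "this doesn't work out" (a toy numpy sketch, not from the post): measure both spins of the entangled state along the same tilted axis and compare with the 50/50 mixture; the correlations disagree.

```python
# Toy check: correlations of the entangled state vs the 50/50 mixture along a tilted axis.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def spin_along(theta):
    # Spin operator along a direction at angle theta from z, in the x-z plane.
    return np.cos(theta) * sz + np.sin(theta) * sx

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Entangled (singlet-like) state (|z+>|z-> - |z->|z+>)/sqrt(2).
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho_entangled = np.outer(psi, psi.conj())

# 50/50 statistical mixture of |z+>|z-> and |z->|z+>.
ud, du = np.kron(up, down), np.kron(down, up)
rho_mixture = 0.5 * np.outer(ud, ud.conj()) + 0.5 * np.outer(du, du.conj())

theta = np.pi / 4                                   # measure BOTH spins along the same tilted axis
obs = np.kron(spin_along(theta), spin_along(theta))

def correlation(rho):
    return float(np.real(np.trace(rho @ obs)))

print(round(correlation(rho_entangled), 3))  # -1.0 : perfect anticorrelation along any common axis
print(round(correlation(rho_mixture), 3))    # -0.5 : the mixture cannot reproduce this
```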
 