
Wave Function Collapse and Entropy

by stevendaryl
Tags: collapse, entropy, function, wave
atyy
#19
Feb1-13, 07:41 AM
Sci Advisor
P: 8,657
Quote by Demystifier:
If there were nothing resembling the wave function collapse, then one could say that the entropy of the subsystem increases, due to decoherence. But the fact is that something resembling the wave function collapse does exist (which decoherence by itself cannot explain, which is why decoherence does not completely solve the measurement problem). For that matter it is not important whether the collapse is related to consciousness (von Neumann), happens spontaneously (GRW), or is only an illusion (many worlds, Bohmian, etc.), as long as it exists at least in the FAPP sense.
Quote by Demystifier:
I think it's not.
I guess it's not obvious to me that the reduced density matrix doesn't involve collapse, since I've seen it said that the Born rule is implicitly used in getting it.
kith
#20
Feb1-13, 08:19 AM
P: 744
Quote by Demystifier:
If there were nothing resembling the wave function collapse, then one could say that the entropy of the subsystem increases, due to decoherence.
But in most cases, there is nothing resembling wave function collapse. Decoherence occurs whenever the interaction between systems leads to entanglement and not only during measurements.

I'm still wondering how exactly this is related to the entropy increases we observe. If we start with two pure states, I'd guess that any fundamental interaction that leads to maximal entanglement should begin to disentangle the systems afterwards. So I would expect an oscillating entropy for the systems (in classical mechanics, no entropy change arises from such a situation). Which of course would call for an explanation of why our observations always take place in the rising-entropy domain.
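To make this concrete, here is a minimal numerical sketch (a toy model of my own choosing, nothing standard): two qubits coupled by [itex]H = g\,\sigma_x \otimes \sigma_x[/itex], starting in the pure product state |00>. The von Neumann entropy of either qubit's reduced state oscillates between 0 and ln 2, exactly the kind of oscillation described above.

[code]
import numpy as np

g = 1.0                                       # coupling strength (arbitrary units)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = g * np.kron(sx, sx)                       # two-qubit interaction Hamiltonian

# Exact time evolution via the spectral decomposition of H.
evals, evecs = np.linalg.eigh(H)

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                 # |00>

def subsystem_entropy(psi):
    """von Neumann entropy of qubit A after tracing out qubit B."""
    m = psi.reshape(2, 2)                     # m[a, b]: amplitude of |a>_A |b>_B
    rho_A = m @ m.conj().T                    # partial trace over B
    p = np.linalg.eigvalsh(rho_A)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

for t in np.linspace(0.0, np.pi, 7):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi_t = U @ psi0                          # cos(gt)|00> - i sin(gt)|11>
    print(f"t = {t:.2f}   S_A = {subsystem_entropy(psi_t):.4f}")
# S_A climbs to ln 2 at gt = pi/4, returns to 0 at gt = pi/2, and so on.
[/code]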
kith
#21
Feb1-13, 08:25 AM
P: 744
Quote by atyy:
I guess it's not obvious to me that the reduced density matrix doesn't involve collapse, since I've seen it said that the Born rule is implicitly used in getting it.
If you do an experiment, you get one definite outcome for your observable. The density matrix contains the probabilities for all possible outcomes, so it isn't the final state you perceive.
Demystifier
#22
Feb1-13, 08:45 AM
Sci Advisor
P: 4,612
Quote by atyy:
I guess it's not obvious to me that the reduced density matrix doesn't involve collapse, since I've seen it said that the Born rule is implicitly used in getting it.
Mathematically, the reduced density matrix is obtained by partial tracing, which technically does not depend on the Born rule. The Born rule only serves as a motivation for doing the partial trace, but formally you can do the partial trace even without such a motivation.

A Born-rule-independent motivation for doing the partial trace is the fact that the evolution of the resulting object (reduced density matrix) does not depend on the whole Hamiltonian, but only on the Hamiltonian for the subsystem.
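A minimal sketch of this point (my own notation, nothing from the post itself): the partial trace is plain linear algebra on the joint density matrix, with no Born-rule probabilities anywhere in the computation. For a Bell pair the joint state is pure (entropy 0), while the reduced state of either qubit is maximally mixed (entropy ln 2).

[code]
import numpy as np

def partial_trace_B(rho, dA, dB):
    """Trace out subsystem B from a density matrix on H_A (x) H_B."""
    r = rho.reshape(dA, dB, dA, dB)           # indices: a, b, a', b'
    return np.einsum('abcb->ac', r)           # sum over b = b'

def von_neumann_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell.conj())

rho_A = partial_trace_B(rho_AB, 2, 2)
print(von_neumann_entropy(rho_AB))            # ~0.0    (joint state is pure)
print(von_neumann_entropy(rho_A))             # ~0.6931 (= ln 2, maximally mixed)
[/code]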
Demystifier
#23
Feb1-13, 09:00 AM
Sci Advisor
P: 4,612
Quote by kith:
So I would expect an oscillating entropy for the systems (in classical mechanics, no entropy change arises from such a situation).
In both quantum and classical mechanics (based on the deterministic Schrödinger and Newton equations, respectively), it is true that entropy will start to decrease after a certain time, and the system will return arbitrarily close to its initial state. However, in both cases, the typical time needed for such a return is many orders of magnitude larger than the age of the Universe.

Quote by kith:
Which of course would call for an explanation why our observations always take place in the rising entropy domain.
That's easy to explain. If we happen to live in an entropy-decreasing era, we will naturally redefine the sign of time accordingly, i.e., re-interpret it as an era in which entropy is increasing. That's because our brains, and our ability to remember, are also governed by the direction in which entropy increases.

The good question is why the direction in which entropy increases is everywhere the same, i.e., why it is not the case that entropy increases in one subsystem and decreases in another. The answer is that it is the interaction between the subsystems that causes them to share the same direction of entropy increase:
http://arxiv.org/abs/1011.4173v5
kith
#24
Feb1-13, 09:16 AM
P: 744
Quote by Demystifier:
In both quantum and classical mechanics (based on the deterministic Schrödinger and Newton equations, respectively), it is true that entropy will start to decrease after a certain time, and the system will return arbitrarily close to its initial state.
Are you talking about the von Neumann and Liouville entropies, or about some coarse-grained entropy? I don't see how the Liouville entropy may increase for a subsystem if it remains constant for the whole system.

Quote by Demystifier:
If we happen to live in an entropy-decreasing era, we will naturally redefine the sign of time accordingly, i.e., re-interpret it as an era in which entropy is increasing. That's because our brains, and our ability to remember, are also governed by the direction in which entropy increases.
Interesting, I haven't thought along these lines before. Can you recommend a not too technical article which expands on this? Also, thanks for the link to your article, it looks quite promising. /edit: I'm skimming it right now, and it is probably exactly the reference I asked about ;-).
Demystifier
#25
Feb1-13, 09:27 AM
Sci Advisor
P: 4,612
Quote by kith:
Are you talking about the von Neumann and Liouville entropies, or about some coarse-grained entropy? I don't see how the Liouville entropy may increase for a subsystem if it remains constant for the whole system.
I am talking about coarse-grained entropy, of course.

Quote by kith:
Interesting, I haven't thought along these lines before. Can you recommend a not too technical article which expands on this?
For example, Hawking explains it in his "Brief History of Time". I guess I cannot recommend less technical literature than that.

A much more refined analysis of that stuff, but still very non-technical, is the book:
H. Price, Time's Arrow and Archimedes' Point

Another good related non-technical book is:
D. Z. Albert, Time and Chance

There is also a good non-technical chapter on that in:
R. Penrose, The Emperor's New Mind
atyy
#26
Feb2-13, 12:30 AM
Sci Advisor
P: 8,657
Quote by Demystifier:
Mathematically, the reduced density matrix is obtained by partial tracing, which technically does not depend on the Born rule. The Born rule only serves as a motivation for doing the partial trace, but formally you can do the partial trace even without such a motivation.

A Born-rule-independent motivation for doing the partial trace is the fact that the evolution of the resulting object (reduced density matrix) does not depend on the whole Hamiltonian, but only on the Hamiltonian for the subsystem.
If the Born rule isn't used, couldn't one just use an arbitrary reweighting of the sum over the environment and still get an object defined only on the subsystem? (I.e., is the averaging over the environment unique if the Born rule isn't used?)
stevendaryl
#27
Feb2-13, 07:34 AM
Sci Advisor
P: 2,143
Quote by Demystifier:
The good question is why the direction in which entropy increases is everywhere the same, i.e., why it is not the case that entropy increases in one subsystem and decreases in another. The answer is that it is the interaction between the subsystems that causes them to share the same direction of entropy increase:
http://arxiv.org/abs/1011.4173v5
To me, the mystery of entropy is not just that the arrows of time for all parts of the universe are the same, but that the thermodynamic arrow of time is aligned with the cosmological arrow of time. That is, for all parts of the universe, entropy decreases in the direction of the Big Bang.
stevendaryl
#28
Feb2-13, 07:53 AM
Sci Advisor
P: 2,143
Quote by kith:
If you do an experiment, you get one definite outcome for your observable. The density matrix contains the probabilities for all possible outcomes, so it isn't the final state you perceive.
Well, a nondeterministic theory cannot possibly describe the final outcome, it can only describe the set of possibilities and their associated probabilities.
stevendaryl
#29
Feb2-13, 08:08 AM
Sci Advisor
P: 2,143
Quote by stevendaryl:
Well, a nondeterministic theory cannot possibly describe the final outcome, it can only describe the set of possibilities and their associated probabilities.
The recipe of using the reduced density matrix may not imply the collapse interpretation, but it seems to be as close as you can get.

The collapse interpretation says that initially the system is in some state [itex]\vert \Psi\rangle[/itex]. You perform an experiment to measure some observable with eigenvalues [itex]\lambda[/itex] and corresponding eigenstates [itex]\vert \Psi_\lambda\rangle[/itex] (for simplicity, assume non-degeneracy). Then the results are that afterward:

For every value of [itex]\lambda[/itex], there is a probability of [itex]\vert \langle \Psi \vert \Psi_\lambda\rangle \vert^2[/itex] that the system is in state [itex]\vert \Psi_\lambda\rangle[/itex].

This is captured by the density matrix formalism as the transition

[itex]\vert \Psi \rangle \langle \Psi \vert \Rightarrow \sum_\lambda \vert \langle \Psi \vert \Psi_\lambda\rangle \vert^2 \vert \Psi_\lambda \rangle \langle \Psi_\lambda \vert[/itex]
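For concreteness, here is that transition as a few lines of numerics (the state and observable are toy choices of mine): the pure projector [itex]\vert \Psi \rangle \langle \Psi \vert[/itex] is replaced by the Born-weighted mixture of eigenprojectors, and the von Neumann entropy jumps from 0 to the Shannon entropy of the outcome probabilities.

[code]
import numpy as np

psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)  # a sample |Psi>
rho_pure = np.outer(psi, psi.conj())                         # |Psi><Psi|

# Non-degenerate observable whose eigenstates are the computational basis.
eigenstates = [np.array([1, 0], dtype=complex),
               np.array([0, 1], dtype=complex)]

# The transition above: sum of Born-weighted eigenprojectors.
rho_mixed = sum(abs(np.vdot(e, psi))**2 * np.outer(e, e.conj())
                for e in eigenstates)

def S(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

print(S(rho_pure))   # 0.0: the pre-measurement state is pure
print(S(rho_mixed))  # -0.8 ln 0.8 - 0.2 ln 0.2 ~ 0.5004
[/code]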
atyy
#30
Feb2-13, 01:56 PM
Sci Advisor
P: 8,657
I guess thinking about it classically, Demystifier's argument must be right. Measurement gives us more information, which is a reduction in entropy. Entropy increases when we forget, according to Landauer's exorcism of Maxwell's demon.

I guess what's not obvious to me is: how much coarse graining do we need, given that the partial trace used in getting the reduced density matrix is itself a form of coarse graining?
StarsRuler
#31
May27-13, 01:11 AM
P: 83
Then the von Neumann entropy and the Shannon entropy are the same if we average over a time that is long compared with the relaxation time of the subsystems but short compared with the relaxation time of the complete system.

When the decoherence process has finished, the off-diagonal terms of the density operator are null, so the entropy has grown; but when collapse occurs, the entropy can decrease, in the extreme case back to a pure state, or to a mixture with no more entropy than before the decoherence process started. If the measurement is not ideal, entropy can still increase, though obviously not up to the entropy at the end of the decoherence process, because then no information would be gained and there would be no measurement at all.
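A small numerical sketch of this entropy bookkeeping (my own toy numbers): a decohered, diagonal density matrix carries the Shannon entropy of its populations; an ideal collapse onto one eigenstate brings the entropy back to zero, while a non-ideal measurement that only resolves a degenerate subspace removes less entropy, matching the smaller information gain.

[code]
import numpy as np

def S(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Decohered 3-level state: off-diagonal terms already zero.
rho_dec = np.diag([0.5, 0.3, 0.2]).astype(complex)
print(S(rho_dec))                        # ~1.0297: entropy after decoherence

# Ideal measurement: collapse onto outcome 0 gives a pure state again.
P0 = np.diag([1.0, 0.0, 0.0]).astype(complex)
rho_ideal = P0 @ rho_dec @ P0 / np.trace(P0 @ rho_dec)
print(S(rho_ideal))                      # 0.0

# Non-ideal measurement: only resolves {0} versus the degenerate pair {1, 2}.
P12 = np.diag([0.0, 1.0, 1.0]).astype(complex)
rho_coarse = P12 @ rho_dec @ P12 / np.trace(P12 @ rho_dec)
print(S(rho_coarse))                     # ~0.6730: some entropy remains
[/code]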
StarsRuler
#32
May27-13, 01:37 AM
P: 83
Of course, the wave function is different depending on whether we take the collapse into account or not, and so is the subsequent evolution. But the situations differ: if you treat the measurement as having taken place, the posterior wave function gives probabilities conditioned on the measurement result; if you do not take the collapse into account and consider the total evolution, you only obtain the probabilities of measurement outcomes, not probabilities conditioned on any particular result. All the problems are solved by regarding the wave function as merely an instrument for calculating the probabilities of measurement outcomes, given the prior information, and not as a real state of the system. With this view the measurement problem does not exist, and collapse is perfectly fine.
stevendaryl
#33
May27-13, 09:03 AM
Sci Advisor
P: 2,143
Quote by StarsRuler:
Of course, the wave function is different depending on whether we take the collapse into account or not, and so is the subsequent evolution. But the situations differ: if you treat the measurement as having taken place, the posterior wave function gives probabilities conditioned on the measurement result; if you do not take the collapse into account and consider the total evolution, you only obtain the probabilities of measurement outcomes, not probabilities conditioned on any particular result. All the problems are solved by regarding the wave function as merely an instrument for calculating the probabilities of measurement outcomes, given the prior information, and not as a real state of the system. With this view the measurement problem does not exist, and collapse is perfectly fine.
I don't think that viewing the wave function as merely a matter of information is very satisfying. For one thing, there is interference between alternatives. What does it mean for information to interfere with other information? For another, when people talk about information, it's usually the case that they make a distinction between the state of the world and our information about that state. Bell's theorem shows that there is no sensible notion of "state" that the wave function could be about.

Something that I've thought about that in some ways makes the information-theoretic view more palatable to me, personally, is the "consistent histories" approach. You give up on describing what the state of the world is and the dynamics of that world state, and instead view the object of interest to be histories of observations. Quantum mechanics tells us the relative probabilities of those histories.
rkastner
#34
May27-13, 01:40 PM
P: 161
Quote by stevendaryl:
The recipe of using the reduced density matrix may not imply the collapse interpretation, but it seems to be as close as you can get.

The collapse interpretation says that initially the system is in some state [itex]\vert \Psi\rangle[/itex]. You perform an experiment to measure some observable with eigenvalues [itex]\lambda[/itex] and corresponding eigenstates [itex]\vert \Psi_\lambda\rangle[/itex] (for simplicity, assume non-degeneracy). Then the results are that afterward:

For every value of [itex]\lambda[/itex], there is a probability of [itex]\vert \langle \Psi \vert \Psi_\lambda\rangle \vert^2[/itex] that the system is in state [itex]\vert \Psi_\lambda\rangle[/itex].

This is captured by the density matrix formalism as the transition

[itex]\vert \Psi \rangle \langle \Psi \vert \Rightarrow \sum_\lambda \vert \langle \Psi \vert \Psi_\lambda\rangle \vert^2 \vert \Psi_\lambda \rangle \langle \Psi_\lambda \vert[/itex]
See the density matrix above: in the transactional interpretation, what you get due to responses from all available absorbers is precisely that density matrix. Each weighted projection operator is an incipient transaction, and its weight is given by the Born rule. This is true collapse, in that there is no causal account for why one of the projection operators becomes the actualized outcome. However, TI provides a clear physical account of measurement as formulated in von Neumann's 'process 1' (the transition from a pure state to a density matrix), whereas no other interpretation can give an account of this other than a decision-theoretic one. The Bohmian theory appears to offer a solution, but it is far from clear that having a 'particle' in one of the channels translates into actualization of that outcome (see, e.g., the critique by Wallace and Brown, 2005; ref on request).

What I can't understand is why there seems to be so much resistance to this obvious solution. Yes, it involves including advanced field solutions (with negative energies), but you can't get away from that in the relativistic domain anyway. So the basic message is that in order to solve the measurement problem, you need to take into account relativistic processes (emission and absorption) in a direct-action picture (basic field propagation is time/energy-symmetric). This is very natural, since it is the most general theoretical formulation. Note that you can regain the empirical asymmetry of radiative processes with appropriate boundary conditions. The existence of those boundary conditions then becomes theoretically falsifiable, which makes the theory methodologically stronger, in contrast to the standard approach, in which an asymmetric field theory is simply assumed ad hoc.

See my new book on TI, in particular Chapters 3 and 4, for how TI solves the measurement problem. Sorry the book is rather pricey, but you can get it at many libraries and through interlibrary loan if interested. I will also provide specially discounted, autographed copies for students with documented financial hardship. Contact me through my website to apply for this discount.
http://transactionalinterpretation.o...to-contact-me/
StarsRuler
#35
May27-13, 11:57 PM
P: 83
I don't know the details of the Bohmian interpretation. But there is no QFT version of this interpretation. I don't know whether that is because it is impossible or because it is simply not clear whether it is possible, but it is an open question.
StarsRuler
#36
May28-13, 12:00 AM
P: 83
Quote by stevendaryl:
Something that I've thought about that in some ways makes the information-theoretic view more palatable to me, personally, is the "consistent histories" approach.
I read the Griffiths book about consistent histories and found it impenetrable. Do you know a more readable source for this interpretation?

I don't understand the problem Bell's theorem poses for the informational interpretation of the wave function.

The only things this view does not settle are the minimum duration it implies for a measurement, as opposed to the measurement's actual duration (maybe that requires studying the measurement apparatus), and the possible degeneracy in a measurement, which can produce an entropy increase relative to the pure state just before decoherence starts.

