New experimental proof of wave-function collapse?

  • #51
Edward Wij said:
For which interpretations is it a problem, and for which is it not? I think it's not a problem for Bill's Ensemble, because for him the mere fact that there is an outcome means the measurement problem is solved. So let's handle the others, for those of us who are not Ensemblers.

First thanks to Atty for giving the answer.

What it's a problem for and what it's not a problem for, I think, depends on future research.

As I have always said, at present it's just a possible problem - a lot more work needs to be done. Just as an example of one way out of the difficulty (assuming it is a difficulty - which I am not that sure of): maybe systems like, say, the CMBR would be strongly influenced over time by the QFT vacuum, but large-scale macro objects like you and me would not be. That could be the natural way to factor systems. Just an idea to show it really needs a lot more investigation.

Thanks
Bill
 
  • #52
bhobba said:
First thanks to Atty for giving the answer.

What it's a problem for and what it's not a problem for, I think, depends on future research.

As I have always said, at present it's just a possible problem - a lot more work needs to be done. Just as an example of one way out of the difficulty (assuming it is a difficulty - which I am not that sure of): maybe systems like, say, the CMBR would be strongly influenced over time by the QFT vacuum, but large-scale macro objects like you and me would not be. That could be the natural way to factor systems. Just an idea to show it really needs a lot more investigation.

Thanks
Bill

If you have a pure state, isn't there already no interference when considering just the subsystem? Why do you need to produce the non-interference separately via the randomness in the quantum vacuum? Or rather, in a pure state, what does it mean to randomize phases? Would you still get a pure state, or would randomizing phases produce the Born rule and cause collapse?
 
  • #53
Edward Wij said:
If you have a pure state, isn't there already no interference when considering just the subsystem? Why do you need to produce the non-interference separately via the randomness in the quantum vacuum? Or rather, in a pure state, what does it mean to randomize phases? Would you still get a pure state, or would randomizing phases produce the Born rule and cause collapse?

Did you read the preceding chapters, as I asked - please, please read them?

When one traces over the environment, intuitively the random phase relative to the phase of what's being observed scrambles it, leading to an average phase of a big fat zero, ie no interference terms.

As chapter 26 said:
'if the sum includes a large number of random phases, |α| can be quite small. Hence a random environment can produce decoherence even in circumstances in which a non-random environment (as discussed in Secs. 26.2 and 26.3) does not.'

But to understand that please please read the preceding chapters like I asked. It will take time and effort - but there is no short-cut.

The quantum vacuum example was just a way we get a random environment, which is what Ruth has an issue with - she thinks the random phases need an explanation - I don't - or rather I believe it's explained by QM and reasonable statistical reasoning. Intuitively, in the example of photons, they will be absorbed and re-emitted many times before interacting with something. The re-emission will be random via its interaction with the quantum vacuum, hence the phases relative to what's being observed will be random. Note - this is intuitive - what is really going on is much more complex:
https://www.physicsforums.com/threads/do-photons-move-slower-in-a-solid-medium.511177/
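
Just to make the chapter 26 point concrete, here is a minimal numerical sketch (my own toy code, not from Griffiths; the phase count N and the uniform distribution are arbitrary illustrative choices). The averaged phase factor alpha = (1/N) sum_k exp(i phi_k) over N random phases has magnitude of order 1/sqrt(N), so for a macroscopic environment the interference terms it multiplies are utterly negligible:

Python:
import numpy as np

# Average of N random phase factors: alpha = (1/N) * sum_k exp(i*phi_k).
# For large N, |alpha| ~ 1/sqrt(N) (random-walk statistics of adding N unit
# vectors pointing in random directions), so the interference terms that
# alpha multiplies get scrubbed down to the 'big fat zero' mentioned above.
rng = np.random.default_rng(0)
for N in [10, 100, 10_000, 1_000_000]:
    phases = rng.uniform(0.0, 2.0 * np.pi, size=N)
    alpha = np.mean(np.exp(1j * phases))
    print(f"N = {N:>9}   |alpha| = {abs(alpha):.6f}   1/sqrt(N) = {1 / np.sqrt(N):.6f}")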

This is why I would like someone more conversant with this stuff (ie QFT and solid state physics) to comment.

Thanks
Bill
 
  • #54
bhobba said:
Did you read the preceding chapters, as I asked - please, please read them?

When one traces over the environment, intuitively the random phase relative to the phase of what's being observed scrambles it, leading to an average phase of a big fat zero, ie no interference terms.

As chapter 26 said:
'if the sum includes a large number of random phases, |α| can be quite small. Hence a random environment can produce decoherence even in circumstances in which a non-random environment (as discussed in Secs. 26.2 and 26.3) does not.'

But to understand that please please read the preceding chapters like I asked. It will take time and effort - but there is no short-cut.

The quantum vacuum example was just a way we get a random environment, which is what Ruth has an issue with - she thinks the random phases need an explanation - I don't - or rather I believe it's explained by QM and reasonable statistical reasoning.

Thanks
Bill

I've read it. I know tracing over the environment introduces the Born rule, and I was asking if randomizing the quantum vacuum can act like tracing over the environment.
 
  • #55
Edward Wij said:
I've read it. I know tracing over the environment introduces the Born rule, and I was asking if randomizing the quantum vacuum can act like tracing over the environment.

No.

It's an (intuitive) explanation of why the phases are random.

BTW I suggest you reread it - tracing over the environment doesn't lead to the Born rule in that account - you may be getting confused with some of Zurek's stuff.

Thanks
Bill
 
  • #56
bhobba said:
No.

It's an (intuitive) explanation of why the phases are random.

BTW I suggest you reread it - tracing over the environment doesn't lead to the Born rule in that account - you may be getting confused with some of Zurek's stuff.

Thanks
Bill

Ok. Do you have any references about the role of spacetime in all this? Because without spacetime there is no environment or even systems, and it may be involved in factoring. This would begin my quest into quantum spacetime and how to connect the two in pondering the measurement problem.
 
  • #57
Edward Wij said:
Ok. Do you have any references about the role of spacetime in all this? Because without spacetime there is no environment or even systems, and it may be involved in factoring. This would begin my quest into quantum spacetime and how to connect the two in pondering the measurement problem.

Yes - read any QFT textbook. Space-time as per Special Relativity is what it's built on.

Here is a text:
https://www.amazon.com/dp/019969933X/?tag=pfamazon01-20

Be aware, however, that it's a step above what we have been discussing here.

Added Later:
The following book examines its relation to the measurement problem:
https://books.google.com.au/books?id=tzYC0KAJot4C

Thanks
Bill
 
  • #58
bhobba said:
Yes - read any QFT textbook. Space-time as per Special Relativity is what it's built on.

Here is a text:
https://www.amazon.com/dp/019969933X/?tag=pfamazon01-20

Be aware, however, that it's a step above what we have been discussing here.

Thanks
Bill

I've read it. But isn't QFT just an effective field theory? The real theory of quantum spacetime *may* have lower-energy consequences not predicted by QFT, which could even serve as the collapser of wave functions. Maybe I have to ask this in the Beyond the Standard Model forum, but just for context I would like to know what you think of it, and whether there are references on beyond-QFT ideas along these themes.
 
  • #59
Edward Wij said:
I've read it. But isn't QFT just an effective field theory?

Hmmmm. I think you are misunderstanding things in those texts, because QFT is not an effective field theory - effective field theories are examples of QFTs. We have zero evidence that the much-vaunted theory of everything that lifts the veil beyond about the Planck scale is not itself a QFT - string theory for example is a QFT, but more general than the usual QFT in 3+1 dimensions. Although I have read string theory may be a bit different, in that ordinary QM may be sufficient for its description - but I am not expert enough to say, and some say QFT and string theory are the same thing.

I think it might be wise for you to more carefully study those texts, and the one by Griffiths, before I discuss it with you again.

Thanks
Bill
 
  • #60
Edward Wij said:
For which interpretations is it a problem, and for which is it not? I think it's not a problem for Bill's Ensemble, because for him the mere fact that there is an outcome means the measurement problem is solved. So let's handle the others, for those of us who are not Ensemblers.

Here are my thoughts on whether factoring the universe into apparatus/system matters.

1) Bohmian Mechanics - not a problem, because the apparatus/system divide is subjective, but the outcomes are objective

2) Many-Worlds - I think it could be a problem, if we keep in mind that decoherence is never perfect, and we want MWI to explain why we see a classical world, ie. why it is unlikely for conscious observers to see a non-classical world

3) Copenhagen - not a problem, because Copenhagen admits it has a measurement problem

4) bhobba's Ensemble - I would like to have bhobba comment. I think it is a problem, because Ensemble is essentially Copenhagen, and the cut should be shiftable and subjective. If the cut is shiftable, won't any cut that is placed by decoherence be too objective?
 
  • #61
atyy said:
bhobba's Ensemble - I would like to have bhobba comment. I think it is a problem, because Ensemble is essentially Copenhagen, and the cut should be shiftable and subjective. If the cut is shiftable, won't any cut that is placed by decoherence be too objective?

It's a problem for my ignorance ensemble, and indeed for most interpretations that use decoherence (BM is the only one I can think of for which it isn't - but there may be others). It's not a problem for Ballentine's ensemble, because an observation simply selects an outcome from the conceptual ensemble of outcomes with that state and observable. Remember, Ballentine doesn't believe decoherence has anything to say about interpretational issues - it's a very real phenomenon of course, and he thinks it's VERY VERY important to the practical realisation of quantum computers - but of no interpretational relevance.

Thanks
Bill
 
  • #62
bhobba said:
Hmmmm. I think you are misunderstanding things in those texts, because QFT is not an effective field theory - effective field theories are examples of QFTs. We have zero evidence that the much-vaunted theory of everything that lifts the veil beyond about the Planck scale is not itself a QFT - string theory for example is a QFT, but more general than the usual QFT in 3+1 dimensions. Although I have read string theory may be a bit different, in that ordinary QM may be sufficient for its description - but I am not expert enough to say, and some say QFT and string theory are the same thing.

I think it might be wise for you to more carefully study those texts, and the one by Griffiths, before I discuss it with you again.

Thanks
Bill

The following entry in Wikipedia is wrong then (you'd better correct it)?

http://en.wikipedia.org/wiki/Quantum_field_theory

"Quantum field theory of the fundamental forces itself has been postulated to be the low-energy effective field theory limit of a more fundamental theory such as superstring theory."
 
  • #63
Edward Wij said:
"Quantum field theory of the fundamental forces itself has been postulated to be the low-energy effective field theory limit of a more fundamental theory such as superstring theory."

English was never my best subject - I in fact failed it at High School.

But can you please read this stuff with your thinking cap on and cogitate on 'Quantum field theory of the fundamental forces'.

Thanks
Bill
 
  • #65
bhobba said:
It's saying nothing can happen in MW. Yet we have things like vacuum fluctuations causing inherent randomness. Since MW is cooked up to be indistinguishable from standard QM, it should include that. So I don't necessarily accept that paper's analysis as correct. In saying that, I am appealing to Quantum Field Theory, which I am not as familiar with as I would like. I would like someone with more knowledge of QFT to comment on the exact cause of random vacuum fluctuations.
I think you misunderstood the concept of a vacuum fluctuation. Here the word "fluctuation" does not refer to a time-dependent process. It is merely a statistical fluctuation, meaning only that some probability distribution is not a delta function, i.e. that the probability distribution assigns a finite probability to a value different from the average value. The vacuum fluctuation is very similar to the fact that a quantum harmonic oscillator in the ground state has a finite probability to be at a position x not equal to 0.
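
To spell out the oscillator analogy (standard textbook material, my notation): the ground state is stationary, nothing "jiggles" in time, yet the position has a nonzero spread,
$$\psi_0(x)=\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2/2\hbar},\qquad \langle 0|\hat x|0\rangle = 0,\qquad \langle 0|\hat x^2|0\rangle = \frac{\hbar}{2m\omega}>0 .$$
The "fluctuation" ##\Delta x = \sqrt{\hbar/2m\omega}## is a property of the state, not of a time-dependent process; field "vacuum fluctuations" are the same statement made mode by mode for the field amplitudes.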
 
  • #67
Demystifier said:
The vacuum fluctuation is very similar to the fact that a quantum harmonic oscillator in the ground state has a finite probability to be at a position x not equal to 0.

Yes.

My point is it is generally assumed, for example, that spontaneous emission is a random process explained by vacuum fluctuations. This could explain the very intuitive fact that the environment is correctly modeled as having random phase.

Thanks
Bill
 
  • #68
bhobba said:
My point is it is generally assumed, for example, that spontaneous emission is a random process explained by vacuum fluctuations.
Fundamentally, spontaneous emission happens because the initial state is not an eigenstate of the full Hamiltonian (including the interaction term). Perturbatively, the effect can be calculated in terms of loop diagrams which can be interpreted as "vacuum fluctuations", but I don't think there is anything fundamental about such a picture.
 
  • #69
bhobba said:
My point is it is generally assumed, for example, that spontaneous emission is a random process explained by vacuum fluctuations. This could explain the very intuitive fact that the environment is correctly modeled as having random phase.
I don't think this works the way you intend because your intuition here seems to be rooted in semi-classical thinking where the atom is treated quantum-mechanically but the field is not.

Using this approximation to describe the interaction between a two-level system and a field mode, spontaneous emission looks indeed like a random interruption of a coherent time evolution (Rabi oscillation). But if you use a full quantum description like the Jaynes-Cummings model, the randomness in the time evolution goes away.
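
For anyone who wants to see this explicitly, here is a minimal sketch (my own toy code, not from any reference in the thread; the frequency, coupling and Fock-space truncation are arbitrary illustrative choices) of the resonant Jaynes-Cummings model with the atom initially excited and the field mode in the vacuum. The excited-state population follows cos^2(gt) exactly - deterministic vacuum Rabi oscillations, with no randomness anywhere in the closed-system evolution:

Python:
import numpy as np
from scipy.linalg import expm

N = 10          # Fock-space truncation for the field mode
g = 1.0         # atom-field coupling (hbar = 1)
omega = 5.0     # common atom/field frequency (on resonance)

# Field operators on the truncated Fock space
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator
adag = a.conj().T

# Atom (qubit) operators; basis order |g>, |e>
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_minus = |g><e|
sp = sm.conj().T                                 # sigma_plus
sz = np.array([[-1, 0], [0, 1]], dtype=complex)

I_f, I_a = np.eye(N), np.eye(2)

# Resonant Jaynes-Cummings Hamiltonian (rotating-wave approximation)
H = (omega * np.kron(adag @ a, I_a)
     + 0.5 * omega * np.kron(I_f, sz)
     + g * (np.kron(a, sp) + np.kron(adag, sm)))

psi0 = np.kron(np.eye(N)[:, 0], np.array([0, 1], dtype=complex))   # |0 photons> (x) |e>
P_e = np.kron(I_f, np.diag([0, 1]).astype(complex))                # projector onto |e>

for t in np.linspace(0.0, 2.0 * np.pi, 9):
    psi_t = expm(-1j * H * t) @ psi0                       # exact unitary evolution
    prob_e = np.real(psi_t.conj() @ P_e @ psi_t)
    print(f"t = {t:5.2f}   P_e = {prob_e:.4f}   cos^2(gt) = {np.cos(g * t) ** 2:.4f}")

Roughly speaking, the apparent randomness of "when the photon is emitted" only enters once the atom is coupled to a continuum of field modes and one asks about measurement outcomes, which I take to be the point being made here.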
 
  • #70
Indeed, spontaneous emission is an effect of the quantization of the EM field. In the semi-classical picture there's no spontaneous emission, and the excited states of, e.g., the electron in the hydrogen atom are stable.
 
  • #71
As others have said, spontaneous emission is not explained by ordinary QM - if it's in an energy eigenstate it should remain so. That's my whole point - one has to go to QED to explain it. My suspicion is that this is the rock-bottom reason for the randomness we see in things like the phase of photons in decoherence.

Thanks
Bill
 
  • #72
kith said:
spontaneous emission looks indeed like a random interruption of a coherent time evolution (Rabi oscillation). But if you use a full quantum description like the Jaynes-Cummings model, the randomness in the time evolution goes away.

But is there any way to predict when it will spontaneously emit? If that's not the case then we know why photons have random phase.

Thanks
Bill
 
  • #73
bhobba said:
But is there any way to predict when it will spontaneously emit? If that's not the case then we know why photons have random phase.
Maybe I didn't completely understand what the problem is that you think the vacuum fluctuations may solve.

Let me set the stage: when two separate quantum systems interact, we get entanglement and therefore decreased coherence if we look at one system only. In general, the unitary time evolution of the combined system may lead to increasing entanglement / decreasing coherence in the subsystems, as well as to decreasing entanglement / increasing coherence.

In measurements, we don't observe coherences between the possible final states of the system at all, which implies that the interaction with the measurement apparatus is such that the coherence is suppressed (a) strongly and (b) in a long-lasting manner. This can be referred to as approximate decoherence.
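
To make the stage-setting concrete, here is a minimal sketch (my own toy example, not from the thread; the controlled y-rotation and the angle values are arbitrary illustrative choices). A system qubit prepared in |+> is entangled with a single "environment" qubit by a rotation of the environment conditioned on the system. Tracing out the environment, the system's off-diagonal element shrinks as the entanglement grows, vanishes at theta = pi, and is fully restored at theta = 2 pi - both the decoherence and the recoherence of a subsystem under purely unitary evolution of the whole:

Python:
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
P0 = np.diag([1, 0]).astype(complex)   # |0><0| on the system qubit
P1 = np.diag([0, 1]).astype(complex)   # |1><1| on the system qubit

for theta in [0.0, np.pi / 2, np.pi, 3 * np.pi / 2, 2 * np.pi]:
    # Controlled interaction: rotate the environment only if the system is |1>
    U = np.kron(P0, np.eye(2)) + np.kron(P1, ry(theta))
    psi = U @ np.kron(plus, ket0)                    # system (x) environment
    rho = np.outer(psi, psi.conj())                  # pure-state density matrix
    # Partial trace over the environment (second tensor factor)
    rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(f"theta = {theta:4.2f}   |rho_01| = {abs(rho_sys[0, 1]):.3f}")

With a single environment qubit the recoherence is trivial to arrange; the point of a macroscopic environment is that the analogue of the theta = 2 pi moment essentially never comes back, which is exactly the question about permanence below.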

Is the problem now how to derive this approximate decoherence or do you want to show that the decoherence is more permanent and that there's no recoherence? Or is it something else?
 
  • #74
kith said:
Is the problem now how to derive this approximate decoherence or do you want to show that the decoherence is more permanent and that there's no recoherence? Or is it something else?

It's to do with some decoherence models requiring a random environment, eg:
http://quantum.phys.cmu.edu/CQT/chaps/cqt26.pdf

Now to me it's a very obvious, reasonable assumption that the phase of the photons (for example as in the above) is random and doesn't require any explanation, simply due to the fact that the number of disorderly phases is much, much greater than the number of orderly ones. I personally wouldn't even count it as a formal assumption - but that's just me - it is an assumption. Now Ruth, who has been referred to in this thread, thinks it needs explaining - in fact she believes this assumption really assumes what you are trying to show, so it's circular. I don't believe that - but that's her argument.

My view is there seems to be a natural randomness in photons from QFT due to spontaneous emission - any photon we observe has likely been emitted randomly by spontaneous emission, eg:
http://www.famaf.unc.edu.ar/~vmarconi/moderna1/emision_estimulada_AJP.pdf

As the above points out, the modern explanation is vacuum fluctuations of the quantum EM field that permeates all space. My understanding of QFT is not as good as I would like, but I do know something of it. The explanation of vacuum fluctuations I have seen is the Heisenberg uncertainty principle - you can't say the field has a definite value, for the same reason. My suspicion is this is the cause of the randomness. It's inherent and removes any possibility of circularity.

Thanks
Bill
 
  • #75
Where is there a problem? Everyday matter provides such randomness. Just the cosmic microwave background radiation, which is the most accurate realization of black-body radiation ever achieved (literally in the universe ;-)), is sufficient to make objects like the moon behave classically FAPP.

For "quantum research/applications" the opposite is a problem, namely how to avoid decoherence over a sufficiently long time and keep quantum coherence stable long enough!
 
  • #76
vanhees71 said:
Where is there a problem?

You're preaching to the converted. There is no problem. But Ruth will not be dissuaded. I am simply trying to come up with an argument with no holes that can be exploited. She believes even statistical mechanics has this circularity. That's the first I have heard of it - the only issue I have read about is actually proving the ergodic hypothesis.

Thanks
Bill
 
  • #77
bhobba said:
It's to do with some decoherence models requiring a random environment, eg:
http://quantum.phys.cmu.edu/CQT/chaps/cqt26.pdf

Now to me it's a very obvious, reasonable assumption that the phase of the photons (for example as in the above) is random and doesn't require any explanation, simply due to the fact that the number of disorderly phases is much, much greater than the number of orderly ones. I personally wouldn't even count it as a formal assumption - but that's just me - it is an assumption. Now Ruth, who has been referred to in this thread, thinks it needs explaining - in fact she believes this assumption really assumes what you are trying to show, so it's circular. I don't believe that - but that's her argument.
I think it depends on what you want to show. Approximate decoherence can be explained by arguments along your lines (namely statistical reasoning which is very similar to Boltzmann's molecular chaos argument). But because of things like Poincaré's recurrence theorem, many arguments in the foundations of QM get weakened if decoherence is only approximate. For example, the intuitive picture of splitting worlds whenever decoherence occurs makes less sense if you keep in mind that after a certain time span - which can be really big - recoherence, and therefore the merging of worlds, occurs.

If Ruth is talking about this, she is correct. If you ask yourself, "how can decoherence in a system be permanent?" and get the answer "because of the random phases of photons which interact with the system" the immediate follow-up question is "how can the phases of photons be permanently random?". After all, the randomness of phases is essentially equivalent to decoherence in the field. So the question about permanent decoherence has been shifted from the system to the field but not answered.
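
Here is a minimal numerical illustration of the recurrence point above (my own toy model, the standard exactly solvable pure-dephasing setup; the couplings g_k, their range and the time window are arbitrary illustrative choices). For a system qubit dephasing against N environment qubits, each prepared in |+> and coupled via sigma_z (x) sigma_z with strength g_k, the system coherence gets multiplied by the exactly known factor r(t) = prod_k cos(2 g_k t). For a handful of environment qubits |r(t)| keeps returning close to 1 (recoherence); as N grows, the best near-revival in the same time window shrinks rapidly, which is why decoherence against a large environment looks permanent FAPP:

Python:
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.5, 300.0, 120_000)   # start at t = 0.5 to skip the trivial t ~ 0 region

for N in [2, 4, 8, 16]:
    g = rng.uniform(0.5, 1.5, size=N)                   # random coupling strengths g_k
    r = np.prod(np.cos(2.0 * np.outer(g, t)), axis=0)   # dephasing factor r(t) = prod_k cos(2 g_k t)
    print(f"N = {N:2d}   best near-revival: max|r(t)| = {np.abs(r).max():.3f}")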

It is really a pattern in these discussions that people talk past each other because of this.
 
  • #78
kith said:
It is really a pattern in these discussions that people talk past each other because of this.

Yes.

I have always said that, regarding this stuff, more research is required.

Thanks
Bill
 
  • #79
bhobba said:
I have always said that, regarding this stuff, more research is required.
I don't think I agree with you here. To me, it looks like fundamental irreversibility is the key issue and I think this question has essentially been settled by statistical mechanics: there is no fundamental irreversibility. The world looks irreversible to us because it started in a special state and we are experiencing it in a coarse-grained way.
 
  • #80
kith said:
I don't think I agree with you here. To me, it looks like fundamental irreversibility is the key issue and I think this question has essentially been settled by statistical mechanics: there is no fundamental irreversibility. The world looks irreversible to us because it started in a special state and we are experiencing it in a coarse-grained way.

However, that is only true for classical statistical mechanics. If one uses quantum mechanics as the basis for statistical mechanics, it is less clear (unless one is not using Copenhagen, but maybe some version of BM).
 
  • #81
atyy said:
However, that is only true for classical statistical mechanics. If one uses quantum mechanics as the basis for statistical mechanics, it is less clear (unless one is not using Copenhagen, but maybe some version of BM).
I think Copenhagen fits in because it is about people doing science.
 
  • #82
kith said:
I think Copenhagen fits in because it is about people doing science.

What I mean is that in classical statistical mechanics, irreversibility is not fundamental, because we take Newton's laws as fundamental and statistical mechanics and thermodynamics as emergent. However, in quantum mechanics, in Copenhagen, we need an observer to decide when an irreversible macroscopic mark has occurred. Since the observer is fundamental, irreversibility is fundamental.
 
  • #83
atyy said:
What I mean is that in classical statistical mechanics, irreversibility is not fundamental, because we take Newton's laws as fundamental and statistical mechanics and thermodynamics as emergent. However, in quantum mechanics, in Copenhagen, we need an observer to decide when an irreversible macroscopic mark has occurred. Since the observer is fundamental, irreversibility is fundamental.
The "irreversible macroscopic mark" is left in a classical system, so I don't think it is fundamentally irreversible. Sure, Copenhagen includes measurements as key elements but whether a measurement has taken place is a matter of practice and not of principle. As you say, it is a decision which the scientist makes. I think it is misleading to call this "fundamental irreversibility".
 
  • #84
kith said:
To me, it looks like fundamental irreversibility is the key issue and I think this question has essentially been settled by statistical mechanics: there is no fundamental irreversibility. The world looks irreversible to us because it started in a special state and we are experiencing it in a coarse-grained way.
Isn't this arguing against your #77? Since when is irreversibility not fundamental? Last I checked the second law was still alive and well, both in cosmology/GR and QFT. This has never been settled by statistical mechanics to my knowledge. A special initial state IS a way to define irreversibility as fundamental.
 
  • #85
kith said:
The "irreversible macroscopic mark" is left in a classical system, so I don't think it is fundamentally irreversible. Sure, Copenhagen includes measurements as key elements but whether a measurement has taken place is a matter of practice and not of principle. As you say, it is a decision which the scientist makes. I think it is misleading to call this "fundamental irreversibility".

But the classical world in Copenhagen is not fully lawed - in particular, it is not fully lawed by Newton's laws or classical relativity, which are falsified by quantum mechanics. So in Copenhagen the decision a scientist makes is fundamental. For the observer to be not fundamental, one needs an interpretation in which the observer is not fundamental, ie. BM or MWI.
 
  • #86
atyy said:
Yes, that's among the papers I know about. I have tried to read almost all your papers with great interest! I guess I'm not enough of an expert to evaluate its correctness by myself, and I don't know if there is consensus about whether it really works, at least not the way Bohmian Mechanics for non-relativistic quantum mechanics has been examined for all sorts of tricky situations, and really does seem to work. Would it be fair to say that this is still pretty much at the frontier of research, rather than textbook knowledge? I have the same reservations about MWI - is it really an alternative interpretation to Copenhagen, or is it still an approach for which it is unclear whether all the problems have really been worked out?

So would it be fair to say that at the consensus level - eg., what one can teach to undergraduates - Copenhagen is still the only interpretation of quantum mechanics?

(Consistent histories, maybe - but it essentially has collapse and all the same problems as Copenhagen, just declared not to be problems)

Well I understand "elegance" and I understand "mathematical consistency" but this is the first time I've come across "being able to teach it to undergraduates" as a criterion for accepting an interpretation :)

Consistent histories can, I think, be formulated without collapse. It then becomes a many-histories theory, which only needs a small dash of ontology to turn it into a Tegmarkian world.
 
  • #87
Derek Potter said:
Well I understand "elegance" and I understand "mathematical consistency" but this is the first time I've come across "being able to teach it to undergraduates" as a criterion for accepting an interpretation :)

Consistent histories can, I think, be formulated without collapse. It then becomes a many-histories theory, which only needs a small dash of ontology to turn it into a Tegmarkian world.

No, what I said was that being unquestionably right was a criterion for teaching it to undergraduates.
 
  • #88
atyy said:
But the classical world in Copenhagen is not fully lawed - in particular, it is not fully lawed by Newton's laws or classical relativity, which are falsified by quantum mechanics. So in Copenhagen the decision a scientist makes is fundamental. For the observer to be not fundamental, one needs an interpretation in which the observer is not fundamental, ie. BM or MWI.
The decision a certain observer makes is not fundamental because another observer can make a different decision. If different observers disagree whether a measurement has been performed, they also disagree about whether a process is irreversible. So what Copenhagen needs is the observer and his subjective notion of irreversibility, which is the second kind in my post #79.
 
  • #89
kith said:
The decision a certain observer makes is not fundamental because another observer can make a different decision. If different observers disagree whether a measurement has been performed, they also disagree about whether a process is irreversible. So what Copenhagen needs is the observer and his subjective notion of irreversibility, which is the second kind in my post #79.

Yes, I agree. But there is no fundamental reversibility either.
 
  • #90
atyy said:
Yes, I agree. But there is no fundamental reversibility either.
Yes, in the sense that Copenhagen doesn't try to remove the observer and his subjective notions.
 
  • #91
What is unclear concerning quantum statistics? The H theorem is most naturally derived from detailed balance, which follows from the unitarity of the S matrix, i.e., the (generalized) optical theorem, which is at the heart of quantum many-body theory. Ironically, it's much harder to do classical than quantum statistical physics. Even if you try to do everything in terms of classical theory, you need to introduce quantum ideas to make everything clear. Although thermodynamics and statistical physics survived the quantum revolution best, many "clouds on the horizon of classical physics" were solved by the discovery of quantum physics and triggered its development. One must not forget that quantum theory started with Planck's solution of the black-body radiation problem, a typical statistical-physics problem, and Einstein's idea about "wave-particle duality" (although obsolete now) came from his analysis of this solution.
 
  • #92
vanhees71 said:
What is unclear concerning quantum statistics?
There isn't anything unclear about what you would probably call the physics. The last couple of posts were just concerned with how the Copenhagen interpretation fits in with what I wrote in post #79. This could be a starting point for another fundamental discussion about interpretations, but I don't want to lead such a discussion right now.
 
  • #93
vanhees71 said:
The H theorem is most naturally derived from detailed balance, which follows from the unitarity of the S matrix, i.e., the (generalized) optical theorem, which is at the heart of quantum many-body theory.
That sounds like Weinberg's proof of the H-theorem, in
S. Weinberg, The Quantum Theory of Fields, Vol. I, Sec. 3.6, pages 150-151.
Can you explain why on the left-hand side of Eq. (3.6.19) we have dt and not d(-t)? The sign of t should not matter in a T-invariant theory. On the other hand, with d(-t) in Eq. (3.6.19) we would eventually "derive" that entropy decreases with time, contrary to what we wanted to obtain.

My point is, you cannot really derive the H-theorem without assuming some form of time asymmetry from the beginning.
 
  • #94
atyy said:
No, what I said was that being unquestionably right was a criterion for teaching it to undergraduates.
Well, it was a light-hearted comment, but if you want to be serious about it, perhaps you can say which version of CI you regard as unquestionably right. CI tends to be an umbrella for all sorts of interpretations, including the Heisenberg fuzziness which inspired Schrödinger's cat. However, as far as I know, CI always has some sort of randomness built into it, whether as a projection postulate or a slightly simpler wavefunction collapse. MW manages without any such thing. So it would seem that CI actually has redundant hypotheses, making it pretty unlikely to be right at all, let alone unquestionably so.
 
  • #95
Demystifier said:
That sounds like Weinberg's proof of the H-theorem, in
S. Weinberg, The Quantum Theory of Fields, Vol. I, Sec. 3.6, pages 150-151.
Can you explain why on the left-hand side of Eq. (3.6.19) we have dt and not d(-t)? The sign of t should not matter in a T-invariant theory. On the other hand, with d(-t) in Eq. (3.6.19) we would eventually "derive" that entropy decreases with time, contrary to what we wanted to obtain.

My point is, you cannot really derive the H-theorem without assuming some form of time asymmetry from the beginning.

This is precisely the one and only correct proof of the detailed-balance relation for the most general case. You don't need time-reversal or parity invariance at all. I also don't understand your question concerning dt vs d(-t), because there's no time integral involved in (3.6.19). You just integrate (3.6.19) over ##\mathrm{d} \alpha##. Then both integrals are equal, and thus ##\int \mathrm{d} \alpha \, P_{\alpha}## is time-independent.

However, your final statement is correct: of course, in deriving transition-matrix elements you assume a directedness of time, the so-called "causality arrow of time". The H theorem just proves that this "causality arrow of time" is the same as the "thermodynamic arrow of time", defined as the direction of time in which entropy doesn't decrease.

Of course, a system in thermal equilibrium doesn't admit the determination of any time direction, because it has forgotten its history. In other words, if you make a movie of an equilibrium system, you cannot tell whether it is being run forward or backward, as long as you look at the macroscopic state only, i.e., averaged/coarse-grained quantities.
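
For readers following along, here is a hedged sketch of the standard structure of that argument (my notation, not Weinberg's exact Eq. (3.6.19), and I am suppressing the density-of-states factors he carries along). One writes a rate equation for the coarse-grained probabilities,
$$\frac{dP_\alpha}{dt}=\int d\beta\,\Big[P_\beta\,\Gamma(\beta\to\alpha)-P_\alpha\,\Gamma(\alpha\to\beta)\Big],\qquad \Gamma(\alpha\to\beta)\propto|S_{\beta\alpha}|^2 .$$
Unitarity of the S matrix (the generalized optical theorem) gives the sum rule
$$\int d\beta\,\Gamma(\alpha\to\beta)=\int d\beta\,\Gamma(\beta\to\alpha),$$
so integrating the rate equation over ##\alpha## makes the two terms cancel and ##\int d\alpha\, P_\alpha## is conserved. The same sum rule together with the convexity inequality ##\ln x \ge 1-1/x## gives
$$\frac{d}{dt}\left(-\int d\alpha\,P_\alpha\ln P_\alpha\right)=\int d\alpha\,d\beta\,\Gamma(\beta\to\alpha)\,P_\beta\,\ln\frac{P_\beta}{P_\alpha}\;\ge\;0,$$
which is the H-theorem. Note that full detailed balance ##\Gamma(\alpha\to\beta)=\Gamma(\beta\to\alpha)## (i.e. T or P invariance) is never needed; the only time asymmetry is the one already built in by writing a forward-in-time rate equation, which is the "causality arrow" mentioned above.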
 
  • #96
uumlau said:
Note that EPR, Bell's inequality and entanglement don't demonstrate "nonlocality" (though this is the common word for it) so much as they confirm the initial "superposition of states" as predicted by quantum mechanics. In other words, the initial states of the photons are not polarized in a particular direction, the initial spins of the fermions are not in some specific x-y-z direction. The "nonlocality" has to do with those states being 100% correlated antisymmetrically, as required by standard quantum mechanics.

Like others in this thread, I'm not seeing anything that looks like "proof of wave function collapse". It's called "proof of existing quantum theory." There is an unfortunate tendency in physics to conceive of the math as being the reality. The math is the description of the reality, the quantitative language we use to communicate about the reality, subject to experimental verification.

Or, to use an analogy from The Matrix - the quote "There is no spoon." There is no wavefunction. There are phenomena that we measure that are described by math we call "wavefunctions", which aptly predict our measurements. The notion that you can "prove" that a mathematical construct has objective material behavior (collapsing or otherwise) is absurd.
 
  • #97
The wave function collapse is a value exchange - a numerical event!
 
  • #98
The OP is long gone and now the discussion is just going in circles. Thread closed.
 
