Density Matrix and State Alpha

In summary: the "state alpha" of spin up and down is an equal superposition of spin-up and spin-down, so a measurement of spin along the vertical axis has an equal probability of yielding either result.
  • #1
jlcd
There is something that I don't quite understand and want clarified. See Tegmark and Wheeler's article "100 years of the quantum":

http://arxiv.org/pdf/quant-ph/0101077v1.pdf

Refer to page 6, where part of the quoted passage reads:

"so if we could measure whether the card was in the alpha or beta-states, we would get a random outcome. In contrast, if we put the card in the state “face up”, it would stay “face up” in spite of decoherence. Decoherence therefore provides what Zurek has termed a “predictability sieve”, selecting out those states that display some permanence and in terms of which physics has predictive power."

But note that the alpha and beta states still exist. What does it mean that we would get a random outcome? Is it saying that since it's a random outcome it doesn't occur in our world, but somehow it still exists?
My math is very high school, so please stick to the article above and don't point me to a treatise on density matrices that would confuse me more. I just want to understand this thing about the alpha state (superposition of up and down). In the Schrödinger's cat analogy, alpha is "cat dead plus cat alive". The density matrix would still produce it, but only a random outcome? Again, does that mean we just don't perceive it but it's still there? (Let's not use Copenhagen, where they made an ad hoc classical cut, sweeping it under the rug; let's use the all-quantum formalism where system, environment, and measuring device are all quantum.) Thanks.
 
  • #2
I think you will gain nothing by thinking about macroscopic cats! Begin with spins.
There is no more mystery in |0> + |1> than in |0> or in |1>.
You can translate this into density-matrix language by saying that
there is no more mystery in (|0> + |1>)(<0| + <1|) than in |0><0| or |1><1|.
These are pure states, and you will get a perfectly predictable output for each of them if you choose the right orientation for the Stern-Gerlach apparatus.
 
  • #3
Really? So, replacing face up & down with spin up and down in the following:

"so if we could measure whether the electron was in the alpha or beta-states, we would get a random outcome. In contrast, if we put the electron in the state “spin up”, it would stay “spin up” in spite of decoherence. Decoherence therefore provides what Zurek has termed a “predictability sieve”, selecting out those states that display some permanence and in terms of which physics has predictive power."

Does this mean the alpha state of spin up and down is a legal quantum state that we just can't detect with our sensors, but is still there for all intents and purposes? Is this what the article meant? Can any bona fide Science Advisor expand on it? Thanks.
 
  • #4
jlcd said:
Does this mean the alpha state of spin up and down is a legal quantum state

By "the alpha state of spin up and down" do you mean the state that is an equal superposition of spin-up and spin-down, so that a measurement of spin along the vertical axis has an equal probability of yielding either result? If so, that is a perfectly sensible quantum state - it's the state in which a spin measurement along the horizontal axis has a 100% probability of finding spin-up, and it's the state the particles in one output beam of a horizontally oriented Stern-Gerlach device will be in.

As the above suggests, we can write the wave function in many different forms, some of which are superpositions and some of which are not. This is analogous to the way that I can describe a vector pointing to the northwest as either "A vector pointing to the northwest" or "a superposition (sum) of a vector pointing north and a vector pointing west"; or I could write an ostensibly unsuperimposed north-pointing vector as either "a vector pointing north" or "a superposition (sum) of a vector pointing northwest and a vector pointing northeast".

If we're going to measure the spin on a vertical axis, we should write the wave function as a superposition of spin-up and spin-down; if we're going to measure the spin on a horizontal axis we should write the wave function as a superposition of spin-left and spin-right. In that form it's particularly easy to calculate the probability of getting either possible outcome; but in any case after the measurement the particle will be in the state corresponding to a 100% probability of getting whatever you measured.
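To make the point above concrete, here is a minimal NumPy sketch (my own illustration, not from the thread): write the equal superposition "alpha" in the vertical basis and in the horizontal basis, and compute the Born-rule probabilities. Vertically the outcome is random (50/50); horizontally it is certain.

```python
import numpy as np

# Basis states for spin along the vertical (z) axis
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# "State alpha": equal superposition of spin-up and spin-down
alpha = (up + down) / np.sqrt(2)

# Horizontal (x) basis states, themselves superpositions of up/down
right = (up + down) / np.sqrt(2)
left = (up - down) / np.sqrt(2)

# Born rule: probability = |<basis|state>|^2
p_up = abs(np.vdot(up, alpha)) ** 2      # 0.5 -> random outcome vertically
p_down = abs(np.vdot(down, alpha)) ** 2  # 0.5
p_right = abs(np.vdot(right, alpha)) ** 2  # 1.0 -> certain outcome horizontally
p_left = abs(np.vdot(left, alpha)) ** 2    # 0.0

print(p_up, p_down, p_right, p_left)
```

The same state is "a superposition" in one basis and "not a superposition" in another, exactly like the northwest-vector analogy.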

(By the way - if your math is "very high school" you might want to mark your threads B instead of I to say that you don't want answers that assume you've had a few years of college-level math. I've reset the level in this thread).
 
  • #5
Let me go back to the original article, as Naima's spins didn't directly answer my questions. Please refer to Tegmark and Wheeler, http://arxiv.org/abs/quant-ph/0101077

"The second unanswered question in the Everett picture was more subtle but equally important: what physical mechanism picks out the classical states — face up and face down for the card — as special? The problem was that from a mathematical point of view, quantum states like "face up plus face down" (let’s call this "state alpha") or "face up minus face down" ("state beta", say) are just as valid as the classical states "face up" or "face down".

So just as our fallen card in state alpha can collapse into the face up or face down states, a card that is definitely face up — which equals (alpha + beta)/2 — should be able to collapse back into the alpha or beta states, or any of an infinity of other states into which "face up" can be decomposed. Why don’t we see this happen?

Decoherence answered this question as well. The calculations showed that classical states could be defined and identified as simply those states that were most robust against decoherence. In other words, decoherence does more than just make off-diagonal matrix elements go away. In fact, if the alpha and beta states of our card were taken as the fundamental basis, the density matrix for our fallen card would be diagonal to start with, of the simple form

density matrix = [ 1  0 ]
                 [ 0  0 ]

since the card is definitely in state alpha. However, decoherence would almost instantaneously change the state to

density matrix = [ 1/2   0  ]
                 [  0   1/2 ]

so if we could measure whether the card was in the alpha or beta-states, we would get a random outcome. In contrast, if we put the card in the state "face up", it would stay "face up" in spite of decoherence. Decoherence therefore provides what Zurek has termed a "predictability sieve", selecting out those states that display some permanence and in terms of which physics has predictive power."
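The two matrices in the quote can be written down directly. A small NumPy sketch (my own illustration, not from the article): before decoherence the card is purely in state alpha; afterwards, in the alpha/beta basis, it is a 50/50 mixture, which is why an alpha-vs-beta measurement comes out random. The purity Tr(ρ²) distinguishes the two cases.

```python
import numpy as np

# The card is definitely in state alpha, so in the {alpha, beta} basis
# the density matrix is |alpha><alpha| = diag(1, 0).
rho_before = np.array([[1.0, 0.0],
                       [0.0, 0.0]])

# After decoherence the article's matrix is diag(1/2, 1/2): a 50/50
# mixture, so measuring alpha-vs-beta gives a random outcome.
rho_after = np.array([[0.5, 0.0],
                      [0.0, 0.5]])

# Purity Tr(rho^2): 1 for a pure state, 1/2 for the maximally mixed
# state of a two-level system.
purity_before = np.trace(rho_before @ rho_before)
purity_after = np.trace(rho_after @ rho_after)
print(purity_before, purity_after)
```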

Does the above mean that if the result is a random outcome, it doesn't tally with the classical world and so is not available to classical-world apparatus, yet it still somehow exists (the alpha state of face up and face down)?
 
  • #6
jlcd said:
Does the above mean that if the result is a random outcome, it doesn't tally with the classical world and so is not available to classical-world apparatus, yet it still somehow exists (the alpha state of face up and face down)?

It means that if you were to prepare a card in the state alpha (a superposition of face-up and face-down), it would very quickly (immediately, for all practical purposes) evolve to either a face-up state or a face-down state. Continued evolution from there will leave it either face-up or face-down; it will not return to state alpha (or any other state in which it is not unambiguously face-up or face-down according to how it evolved).

This behavior follows from the fact that the card is made up of a very large (maybe 10^24) number of individual particles - that's what makes it subject to decoherence and makes it behave as a "classical" object. The behavior of a single particle in superposition is quite different.
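A toy model of why particle number matters so much (my own sketch, under the simplifying assumption that each environment particle independently contributes a factor of magnitude less than 1 to the coherence term): the off-diagonal element scales like |z|^n and dies off exponentially in n.

```python
import numpy as np

# Toy model: each of n environment particles multiplies the off-diagonal
# (interference) term of the system's reduced density matrix by a factor
# z with |z| < 1, so the coherence scales like |z|^n.
def coherence(n, overlap=0.99):
    """Magnitude of the off-diagonal term after entangling with n particles."""
    return overlap ** n

for n in [1, 10, 100, 10_000]:
    print(n, coherence(n))

# Even if each particle barely distinguishes the two branches
# (overlap 0.99), ten thousand particles suppress interference
# below 1e-40 -- and a card has far more particles than that.
```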
 
  • #7
Nugatory said:
It means that if you were to prepare a card in the state alpha (a superposition of face-up and face-down), it would very quickly (immediately, for all practical purposes) evolve to either a face-up state or a face-down state. Continued evolution from there will leave it either face-up or face-down; it will not return to state alpha (or any other state in which it is not unambiguously face-up or face-down according to how it evolved).

This behavior follows from the fact that the card is made up of a very large (maybe 10^24) number of individual particles - that's what makes it subject to decoherence and makes it behave as a "classical" object. The behavior of a single particle in superposition is quite different.

But the above assumes a classical divide or classical bias. In a pure quantum world, both "face up" and the alpha state of face up and face down are valid states. The density matrix is just a tool about probability, a classical device, so how could it choose the preferred basis? Are you familiar with the factorization problem? Do you agree there is such a problem? Demystifier described it thus: "Factorization should be a big problem only for those who take MWI very seriously (even if they do not realize it). But for all the others factorization is not really a problem, because in other interpretations of QM one can always identify a "natural" factorization... It's a problem if you think you can use any factorization. But in other interpretations you don't use any factorization. You use the "natural" factorization, which is essentially unique."

Can anyone correct me if this is the essence of the factorization problem? Those who sweep it under the rug are using a classical cut or bias to prefer a natural factorization.
 
  • #8
jlcd said:
But the above assumes a classical divide or classical bias. In a pure quantum world, both "face up" and the alpha state of face up and face down are valid states.
There's no assumed classical/quantum divide there. The behavior I'm describing is quantum-mechanical all the way; the classical behavior of the playing card is the result of combining the individual quantum-mechanical behavior of each of the enormous number of particles that make up a playing card.

You've already said that you don't want the math... So you'll have to take the word of people who have done the math when they say that if you do a purely quantum-mechanical calculation of the evolution of a superimposed system of many particles, the superposition will very rapidly evolve into states that behave classically.

If you can get hold of David Lindley's book "Where does the weirdness go?", it's an excellent non-technical explanation of how decoherence works and how quantum mechanics predicts that superposition effects become less and less noticeable as the number of particles in the system increases.
 
  • #9
Nugatory said:
There's no assumed classical/quantum divide there. The behavior I'm describing is quantum-mechanical all the way; the classical behavior of the playing card is the result of combining the individual quantum-mechanical behavior of each of the enormous number of particles that make up a playing card.

You've already said that you don't want the math... So you'll have to take the word of people who have done the math when they say that if you do a purely quantum-mechanical calculation of the evolution of a superimposed system of many particles, the superposition will very rapidly evolve into states that behave classically.

If you can get hold of David Lindley's book "Where does the weirdness go?", it's an excellent non-technical explanation of how decoherence works and how quantum mechanics predicts that superposition effects become less and less noticeable as the number of particles in the system increases.

I understand that part. I was asking about the factorization problem, which I read about in a thread a month ago and am still getting my head around. What is your comment on Demystifier's summary of it, which says:

" In a paper entitled
"Nothing happens in the Universe of the Everett Interpretation":
http://arxiv.org/abs/1210.8447
Jan-Markus Schwindt has presented an impressive argument against the many-world interpretation of quantum mechanics.

The argument he presents is not new, but, in my opinion, nobody ever presented this argument so clearly.

In a nutshell, the argument is this:
To define separate worlds of MWI, one needs a preferred basis, which is an old well-known problem of MWI. In modern literature, one often finds the claim that the basis problem is solved by decoherence. What J-M Schwindt points out is that decoherence is not enough. Namely, decoherence solves the basis problem only if it is already known how to split the system into subsystems (typically, the measured system and the environment). But if the state in the Hilbert space is all what exists, then such a split is not unique. Therefore, MWI claiming that state in the Hilbert space is all what exists cannot resolve the basis problem, and thus cannot define separate worlds. Period! One needs some additional structure not present in the states of the Hilbert space themselves.

As reasonable possibilities for the additional structure, he mentions observers of the Copenhagen interpretation, particles of the Bohmian interpretation, and the possibility that quantum mechanics is not fundamental at all."
 
  • #10
Nugatory said:
It means that if you were to prepare a card in the state alpha (a superposition of face-up and face-down), it would very quickly (immediately, for all practical purposes) evolve to either a face-up state or a face-down state. Continued evolution from there will leave it either face-up or face-down; it will not return to state alpha (or any other state in which it is not unambiguously face-up or face-down according to how it evolved).

This behavior follows from the fact that the card is made up of a very large (maybe 10^24) number of individual particles - that's what makes it subject to decoherence and makes it behave as a "classical" object. The behavior of a single particle in superposition is quite different.

Based on reading Demystifier's references and others: just because something is composed of a very large (maybe 10^24) number of individual particles doesn't mean it is subject to decoherence. Remember, in many worlds it is pure unitary evolution; there is no decoherence without collapse first. So you must first assume there is collapse; without it the card won't even be a card. Is this reasoning correct? Also, the density matrix is not just a tool for probability; it has built-in collapse to make the off-diagonal (interference) terms very small, as I read in a paper. Do you agree with everything I described here? Just relating what I read. Thanks.
 
  • #11
jlcd said:
there is no decoherence without collapse first.
Wherever you got that idea from, it's wrong. Decoherence is the result of unitary and collapse-free evolution from the initial state.
 
  • #12
Nugatory said:
Wherever you got that idea from, it's wrong. Decoherence is the result of unitary and collapse-free evolution from the initial state.

No. I studied this for several nights and weeks. This is a misconception that has confused even mainstream experts. It's very subtle, but many experts have already agreed to it, like Demystifier, PeterDonis, etc. Please read the following in detail (and if you don't agree with it, why not?):

http://transactionalinterpretation....ally-split-in-the-many-worlds-interpretation/

http://philsci-archive.pitt.edu/10757/1/Einselection_and_HThm_Final.pdf

"The goal of decoherence is to obtain vanishing of the off-diagonal terms, which corresponds to the vanishing of interference and the selection of the observable R as the one with respect to which the universe purportedly 'splits' in an Everettian account. As observed by Bub, since the resulting mixed state is an improper one, it does not license the interpretation of the system's density matrix as representing the determinacy of outcome perceived by observers -- but that is a separate issue. The aim of this paper is to point out that the vanishing of the off-diagonal terms is crucially dependent on an assumption that makes the derivation circular.

As is apparent from (5), the off-diagonal terms are periodic functions that oscillate in value as a function of time. However, as Bub notes, z(t) will have a very small absolute value, providing for very fast vanishing of the off-diagonal elements and a very long recurrence time for recoherence when n is large, based on the assumption that the initial states of the n environmental subsystems and their associated coupling constants are random. But the randomness appealed to here is not licensed by the Everettian program, which states that the quantum state of the universe is that of a closed system that evolves only unitarily. The 'randomness' of the environmental systems does not arise from within the Everettian picture. When one forecloses that assumption, the decoherence argument fails -- and with it, 'einselection,' which depends on essentially the same argument to obtain a preferred macroscopic observable for 'pointers.'"
 
  • #13
jlcd said:
It's very subtle, but many experts have already agreed to it, like Demystifier, PeterDonis, etc. Please read the following in detail (and if you don't agree with it, why not?):
[much quoted text...]

I agree with what you've quoted above, but I see nothing there that supports your claim that "there is no decoherence without collapse first."
 
  • #14
Nugatory said:
I agree with what you've quoted above, but I see nothing there that supports your claim that "there is no decoherence without collapse first."

Well. Mathematically, decoherence is the decay of the off-diagonal elements of the system density matrix in a specific basis. Now the paper is saying: "The crucial point that does not yet seem to have been fully appreciated is this: in the Everettian picture, everything is always coherently entangled, so pure states must be viewed as a fiction -- but that means that it is also fiction that the putative 'environmental systems' are all randomly phased. In helping themselves to this phase randomness, Everettian decoherentists have effectively assumed what they are trying to prove: macroscopic classicality only 'emerges' in this picture because a classical, non-quantum-correlated environment was illegitimately put in by hand from the beginning. Without that unjustified presupposition, there would be no vanishing of the off-diagonal terms and therefore no apparent diagonalization of the system's reduced density matrix that could support even an approximate, 'FAPP' mixed state interpretation."

Therefore, without collapse to randomize the phases, where initially "everything is always coherently entangled", there is no decay of the off-diagonal elements of the system density matrix, and hence no decoherence.
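Setting aside the disputed question of where the random phases come from, the mechanical fact both sides agree on can be checked in a few lines (my own sketch): averaging the off-diagonal phase factor e^{iθ} over random phases drives it to zero.

```python
import numpy as np

# Averaging the off-diagonal term e^{i*theta} over many random phases:
# the sum of randomly oriented unit phasors cancels, so the ensemble-
# averaged coherence vanishes. The debate above is over whether such
# randomness is available in a purely unitary Everettian universe.
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, size=100_000)
off_diagonal = np.mean(np.exp(1j * phases))

print(abs(off_diagonal))  # close to 0
```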
 
  • #15
Can't we say that when we prepare a Schrödinger cat we have measured it?
 
  • #16
Nugatory:

"in the Everettian picture
..."

You said in the other, closed thread: "Note the text that I have marked in boldface. The paper you are quoting from is talking about the problem the Everettian picture has with decoherence; it doesn't have anything to do with the way that decoherence appears as a consequence of unitary evolution, and it most certainly is not saying what you're claiming it does."

You didn't say this earlier here, so I didn't know. So it means that depending on the picture, it is either:

In Copenhagen (and some others), decoherence occurs first, then collapse.
In Everettian, collapse occurs first, then decoherence?

Is it just a matter of semantics how they are presented, and they are both right? Do all the others, like Demystifier, agree with this?

This is the distinction I'm inquiring about. Thank you.
 
  • #17
In the ongoing thread "What if there are no Schrödinger's cats?", Bhobba shared in post #22 the following old thread for review: https://www.physicsforums.com/threads/what-if-there-are-no-schroedingers-cats.850935/page-2. I read that he and atyy were discussing improper mixed states. Atyy argued an improper mixed state was a superposition; Bhobba said it was not, and wrote: "Its an improper mixed state - its requires an extra interpretive assumption to be proper. I am in no way hiding that." And atyy said: "Yes, that's really what I mean when I say it is a superposition. It is because the whole system remains in pure state that the mixture of the reduced density matrix isn't proper without an additional interpretive assumption (such as something like collapse or hidden variables)."

I'm trying to connect this with what Kastner wanted to do. It seems she wanted to make the improper mixture proper via her version of collapse, but what I can't understand is why physicists like Bhobba opposed her version, and why Nugatory even wanted to call it misinformation. It's very subtle, and all of this is quite complicated. Decoherence arguments are getting so sophisticated, even for physicists, so we starters have to work even harder to grasp them.

In Everettian QM there is really no collapse, yet Kastner said there was collapse. So even MWI has many versions. This can really confuse anyone. Maybe someone can lay out improper vs. proper mixtures and collapse vs. no collapse across the versions - Copenhagen, MWI, Bohmian - as a guide for those of us who are perplexed.

About the book "Where Does the Weirdness Go?": I've been reading it, as Nugatory suggested, but it's quite incomplete. It focuses on Copenhagen only, without the more modern quantum Darwinism. A reviewer at Amazon wrote: "The moral of all these difficulties seems to be - even with all the progress that has been made and Lindley's own rather 'rosy' outlook in his book - that decoherence has NOT solved the deepest puzzles of QM. Lindley's book sub-title is '...But Not As Strange As You Think'. Alas, it appears the quantum world is actually STRANGER to our familiar macroscopic intuitions than we might hope..."

If anyone is familiar with Kastner: how did her version further complicate an already confusing picture? If the environment is not random and the whole Hilbert space evolves unitarily, could there be decoherence? Is this separate from, or related to, the improper-mixed-state point made by the Bhobba camp? By atyy's reasoning, if the environment is random, can we even call it an improper or a proper mixed state, since (per Kastner) there was no superposition to begin with? And how do these two versions relate? Can anyone enlighten us?
 
  • #18
The experts (Bhobba, atyy) are tired of this subject or debate. The not-so-expert are either unfamiliar with it or confused. This leaves us newbies without much input; you can't find any layman's books on the details of this. So let me make some statements for comment by those experts or semi-experts who are aware of this.

The problem of outcomes is going from an improper mixed state to a proper mixed state. That is the conventional problem.

My question is how to connect this with Kastner's papers. Is it simply about going from an improper mixed state to a proper mixed state? But her arguments are so different - or is it just semantics? Her argument is that in a purely unitary Hilbert space there is no decoherence. The conventional puzzle is the problem of going from improper to proper mixed states. The MWI camp says the mixed states make up separate branches. But Demystifier and others emphasized that this conventional MWI is really Bohmian MWI, because the classical branches are already preferred. So is Kastner's MWI the pure MWI? Note that Copenhagen and Bohmian mechanics have a classically preferred basis, so the argument is more of a purely quantum one (perhaps quantum Darwinism?). I wrote this to summarize what I've learned about this very subtle, complex idea and to organize my thinking about it.

In conventional decoherence it's about system A entangling with the environment, and measuring a subsystem would produce decoherence (an improper mixed state). Does this assume the entire universe must be in a pure state, or just the system and environment? Or is the environment the universe itself? My question is why Kastner's view is so disbelieved by many, like Bhobba, who stated the environment is naturally random and claimed everyone assumes this except Kastner. But if the environment is naturally random, it's already a collapsed state - what caused the initial collapse? Isn't that a valid question? Can anyone comment on all this?
 
  • #19
Nugatory said:
There's no assumed classical/quantum divide there. The behavior I'm describing is quantum-mechanical all the way; the classical behavior of the playing card is the result of combining the individual quantum-mechanical behavior of each of the enormous number of particles that make up a playing card.

You've already said that you don't want the math... So you'll have to take the word of people who have done the math when they say that if you do a purely quantum-mechanical calculation of the evolution of a superimposed system of many particles, the superposition will very rapidly evolve into states that behave classically.

If you can get hold of David Lindley's book "Where does the weirdness go?", it's an excellent non-technical explanation of how decoherence works and how quantum mechanics predicts that superposition effects become less and less noticeable as the number of particles in the system increases.

I finally started to read seriously the book suggested by Nugatory. It's quite interesting. Some - I think atyy (I'll search) - say Copenhagen and decoherence are separate and incompatible. But I read the following part in the book: "Decoherence therefore completes the Copenhagen interpretation of quantum mechanics, by making the process of measurement a real physical phenomenon rather than a decree from Bohr. Niels Bohr knew that measurements in fact took place in physics laboratories all around the world, but he couldn't explain how, and so simply declared that they happened. This was always a genuine flaw with his interpretation, but decoherence resolves it." I read this passage at random and it piqued my interest. I'll read the book from start to end, and return to the density matrix with Nugatory after that.
 
  • #20
When you read that decoherence completes Copenhagen, you have to understand that it adds something. Do not think it gives you a complete theory; it does not solve the problem of outcomes.
 
  • #21
Naima, are you familiar with quantum Darwinism? There the universe starts as unitary. Does it accept that a random occurrence like beta decay just occurs randomly? I'm trying to differentiate it from Kastner's decoherence idea that if the universe starts unitary, there should be no dephasing and even beta decay should not occur. I'm comparing these two to the standard decoherence idea, where a subsystem of the pure system-plus-environment appears as a mixed (proper or improper) state. So are there many meanings of decoherence? What can you say about this?
 
  • #22
My interest goes to things that can be calculated or measured.
Decoherence belongs to them.
Are you talking about that kind of thing?
 
  • #23
naima said:
My interest goes to things that can be calculated or measured.
Decoherence belongs to them.
Are you talking about that kind of thing?

I've been thinking about this a lot lately, comparing different references and sources. Can it be proven that, for a pure state (pure superposition) in Hilbert space, decoherence can occur by focusing on a subsystem (ignoring system B, for example)? Is this sufficient, or do you have to force collapse to occur? Perhaps the density matrix has a built-in "collapser" (making the off-diagonal terms near zero), or is it that merely measuring a subsystem automatically dephases the waves? Can you or anyone show this is true in an actual experiment, or in a Java illustration of the waves joining, phasing, and dephasing depending on where you measure?
 
  • #24
Decoherence is above all a measurable phenomenon in the quantum dynamics of open systems. If a small quantum system is coupled to a considerably larger environment, the experimentally observed behaviour (like the decay of off-diagonal elements of the system density matrix) can under certain conditions be described by the Lindblad equation. Unlike the von Neumann equation (which is the equivalent of the Schrödinger equation for density matrices), the Lindblad equation is non-unitary. However, it can be derived from the unitary dynamics of the whole system, including the environment, by making a number of approximations.
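A minimal numerical sketch of this (my own toy example, not from the post), for the pure-dephasing special case of the Lindblad equation, dρ/dt = γ(ZρZ − ρ) with Z = σ_z: the off-diagonal elements decay as e^(−2γt) while the populations stay fixed.

```python
import numpy as np

# Pure-dephasing Lindblad equation for a qubit, integrated with small
# Euler steps:  d(rho)/dt = gamma * (Z rho Z - rho),  Z = sigma_z.
# This non-unitary equation decays the off-diagonal (coherence) elements
# while leaving the populations (diagonal) untouched.
Z = np.diag([1.0, -1.0])
gamma, dt, steps = 1.0, 1e-4, 20_000  # evolve to t = 2

rho = 0.5 * np.ones((2, 2), dtype=complex)  # pure state (|0>+|1>)/sqrt(2)
for _ in range(steps):
    rho = rho + dt * gamma * (Z @ rho @ Z - rho)

# Analytic solution for comparison: rho01(t) = 0.5 * exp(-2*gamma*t)
t = dt * steps
print(abs(rho[0, 1]), 0.5 * np.exp(-2 * gamma * t))
```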

Trying to explain collapse by decoherence is an application of this physically well-founded theory. My opinion is that many objections against this have essential analogues in classical statistical mechanics (Boltzmann's H-theorem vs. Loschmidt's paradox), so at least for them, the degree of mystery should be similar after a careful analysis and adaptation of your intuition. For me, the most distinguishing feature between classical mechanics and QM with respect to decoherence is that in QM the entropy of a subsystem can be bigger than the entropy of the whole system.

What I would like to see is a treatment of a simple EPR-type experiment using the language of open quantum systems. I am unsure whether to expect something qualitatively different there or not. I tend to think not but there are many subtleties because our everyday language cares a lot about time, space and separability of objects while the mathematics of entanglement doesn't.

Also there are objections against dismissing collapse which are unrelated to physical decoherence mechanisms, like Schwindt's paper against many worlds.

Personally, I also like the idea of reversing the point of view. If we can't get rid of a certain amount of collapse in QM, maybe this is a hint that certain preconceived notions in talking about classical mechanics shouldn't be seen as central to the theory but merely as useful tools for mental pictures.
 
  • #25
I've been reading decoherence papers and this thread over a dozen times to get hold of the essence of it, and I think I'm getting it.
A question:

If the off-diagonal terms of the density matrix vanish and there is no more interference, and yet there is no collapse (and no Born rule applied either), would you have any classical observable? I read about this regarding the density matrix in Maximilian Schlosshauer's "Decoherence and the Quantum-to-Classical Transition", where he mentions: "The predictively relevant part of the decoherence theory relies on reduced density matrices, whose formalism and interpretation presume the collapse postulate and Born's rule."
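The "reduced density matrix" in that quote can be computed directly. A minimal NumPy sketch (my own illustration): start with a pure entangled state of system plus environment, trace out the environment, and the system's state comes out diagonal with no interference terms - the improper mixture discussed earlier in the thread.

```python
import numpy as np

# Pure entangled state of system+environment; tracing out the environment
# yields an (improperly) mixed reduced density matrix for the system.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# (|up>|E0> + |down>|E1>)/sqrt(2), with orthogonal environment states
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho_total = np.outer(psi, psi.conj())  # the whole is pure: purity = 1

# Partial trace over the environment (second tensor factor)
rho_sys = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)                               # diag(1/2, 1/2), no off-diagonals
print(np.trace(rho_total @ rho_total).real)  # 1.0  (whole is pure)
print(np.trace(rho_sys @ rho_sys).real)      # 0.5  (part is mixed)
```

Whether this diagonal reduced matrix licenses talk of definite outcomes without a collapse postulate is exactly the interpretive question Schlosshauer flags.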
 

1. What is a density matrix?

A density matrix, also known as a density operator, is a mathematical representation of the statistical state of a quantum system. It encodes the probabilities of measurement outcomes, the coherences between quantum states, and how these evolve over time.

2. How is the density matrix different from a pure state?

A pure state is one in which the quantum system is described by a single state vector. A density matrix can also represent a mixed state: a statistical ensemble of several states. A pure state's density matrix has rank one (a single non-zero eigenvalue, equal to 1), so its purity Tr(ρ²) equals 1, while a mixed state has several non-zero eigenvalues and Tr(ρ²) < 1.
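The distinction in concrete form (my own sketch): a superposition (pure state) has off-diagonal coherence terms, while a 50/50 mixture does not, even though both give 50/50 statistics for an up/down measurement.

```python
import numpy as np

# Pure superposition vs. classical 50/50 mixture of the same two states.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha = (up + down) / np.sqrt(2)

rho_pure = np.outer(alpha, alpha)                                # [[.5,.5],[.5,.5]]
rho_mixed = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)  # diag(.5,.5)

# Same diagonals (same up/down statistics) ...
print(np.diag(rho_pure), np.diag(rho_mixed))
# ... but only the pure state has purity Tr(rho^2) = 1.
print(np.trace(rho_pure @ rho_pure), np.trace(rho_mixed @ rho_mixed))
```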

3. What is the significance of the trace of a density matrix?

The trace of a density matrix is always equal to 1, because the probabilities of all outcomes must sum to one. Related traces carry the physical information: Tr(ρH) is the expectation value of the energy, and Tr(ρ²) is the purity. The trace is invariant under unitary transformations, making it a useful tool in quantum mechanics.

4. How is the state alpha represented in a density matrix?

The state alpha, written as the vector (|up⟩ + |down⟩)/√2, has the density matrix ρ = |α⟩⟨α|, whose entries in the up/down basis are all 1/2. The diagonal entries give the probabilities of finding the system in each basis state; the off-diagonal entries are the coherences that distinguish the superposition from a classical 50/50 mixture.

5. How does the density matrix evolve over time?

For a closed system, the evolution of a density matrix is described by the von Neumann equation, dρ/dt = −(i/ħ)[H, ρ], the density-matrix counterpart of the Schrödinger equation. As time passes, the density matrix changes to reflect the changing probabilities of the system being in different states, which allows us to make predictions about the behavior of quantum systems. For open systems coupled to an environment, the non-unitary Lindblad equation applies instead.
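A small sketch of closed-system evolution (my own example, for H = (ω/2)σ_z with ħ = 1, where the unitary U = exp(−iHt) can be written exactly): von Neumann evolution ρ(t) = UρU† rotates the phase of the off-diagonal term but keeps its magnitude, and the purity, fixed. Only decoherence (non-unitary dynamics) shrinks it.

```python
import numpy as np

# Von Neumann evolution rho(t) = U rho U^dagger, with U = exp(-i H t)
# for H = (omega/2) * sigma_z (hbar = 1). U is diagonal here, so it can
# be written down without a numerical solver.
omega, t = 2.0, 0.7
U = np.diag([np.exp(-1j * omega * t / 2), np.exp(1j * omega * t / 2)])

alpha = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(alpha, alpha.conj())
rho_t = U @ rho0 @ U.conj().T

# Unitary evolution: coherence magnitude and purity are unchanged.
print(abs(rho_t[0, 1]), np.trace(rho_t @ rho_t).real)
```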
