What is really that density matrix in QM?

  • Thread starter Jamister
  • #26
vanhees71
Well, you can define all kinds of entropy. It depends on the context which entropy you use. Here you apply it to a different question and thus define it relative to a different notion of "complete knowledge".

Here you take an observable represented by the operator
$$\hat{K}=\sum_k k |k \rangle \langle k |.$$
Your random experiment here is the measurement of the observable ##K## on a system prepared in a pure state represented by ##\hat{\rho}=|\psi \rangle \langle \psi|##. This defines the probabilities for finding a given value ##k## as
$$p_k=|\langle k|\psi \rangle|^2.$$
Thus you define for this random experiment complete knowledge to mean knowledge of the value ##k##. That's why in general the entropy for this random experiment is
$$\tilde{S}_K=-\sum_k p_k \ln p_k,$$
which is indeed ##0## if and only if only one ##p_k=1## and all others 0, i.e., if the state is prepared in the pure state ##\hat{\rho}_k=|k \rangle \langle k|##. Thus the entropy principle is applied correctly to this random experiment.
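
As a minimal numerical sketch of this distinction (the particular qubit state is just an arbitrary choice for illustration): the outcome entropy of the ##K## measurement is generally nonzero for a pure state, while the von Neumann entropy of the state itself vanishes.

```python
# Sketch: outcome entropy of measuring K on a pure state vs. the von Neumann entropy.
import numpy as np

psi = np.array([np.cos(0.3), np.sin(0.3)])          # some pure state |psi>
rho = np.outer(psi, psi.conj())                     # rho = |psi><psi|

p = np.abs(psi) ** 2                                # p_k = |<k|psi>|^2 in the K eigenbasis
S_outcomes = -np.sum(p * np.log(p))                 # entropy of this random experiment

w = np.linalg.eigvalsh(rho)
w = w[w > 1e-12]
S_vN = -np.sum(w * np.log(w))                       # von Neumann entropy, 0 for a pure state

print(S_outcomes, S_vN)                             # roughly 0.30 vs 0.0
```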

It's of course not the von Neumann entropy, which defines another entropy for another "random experiment". It does not ask for the specific value of a specific observable (or a set of observables) but it asks the question of "state determination", i.e., it considers all possible quantum states ##\hat{\rho}## (i.e., pure and mixed).
Here the question is what are the states of complete knowledge of the system as a whole, not specific knowledge about the value of an observable. It's important to get this clear, because that's what really distinguishes the notion of state in the classical sense (complete knowledge means knowing the always-determined values of all possible observables) from the notion of state in the quantum sense, where complete knowledge means that the system is prepared in a pure state.

This is equivalent to having determined values for some complete set of compatible observables. That's what makes QT really distinct from classical physics: The determination of the values of two observables can be impossible, i.e., in general one cannot prepare the system in a way that both observables take determined values. Thus one has to define the compatibility of observables: Two observables ##A## and ##B## are compatible iff, for any value ##a## that ##A## can take (one of the eigenvalues of the operator ##\hat{A}## representing the observable) and any value ##b## that ##B## can take, there is at least one state in which ##A## and ##B## take the given possible values ##a## and ##b## with certainty.

It's much simpler to state the math, but it follows from the above physical definition: Observables are compatible if there's a complete set of common eigenvectors of their corresponding operators, which is equivalent to these operators commuting.
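
A quick numerical sketch of this equivalence (the example matrices and eigenvalues are arbitrary choices, purely for illustration):

```python
# Sketch: observables built with a common complete set of eigenvectors commute, and the
# eigenvectors of one (non-degenerate) observable also diagonalize the other.
import numpy as np

rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # a shared orthonormal eigenbasis
A = V @ np.diag([1.0, 2.0, 3.0]) @ V.T           # compatible observables: same eigenvectors,
B = V @ np.diag([5.0, -1.0, 0.5]) @ V.T          # different eigenvalues

print(np.allclose(A @ B, B @ A))                 # True: the operators commute
w, U = np.linalg.eigh(A)
print(np.round(U.T @ B @ U, 8))                  # diagonal, so U diagonalizes B as well
```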

A complete set of compatible observables is one that determines their common eigenvectors uniquely (up to a phase factor of course).

That's why von Neumann entropy is the right "measure of missing information" (I found the above description of entropy as a measure of the "surprise" in getting some result from a random experiment also a nice metaphor to get an intuitive idea about entropy as an information measure) for this question of what's the most complete information you can have for a quantum system in general. It turns out that these are the pure states, corresponding to the determination of the values of a complete set of compatible observables when the system is prepared in such a state.

The above example is of course not an exception. It just specifies in more detail what you are interested in, namely a given specific observable. If the spectrum is non-degenerate, the observable by itself is a complete set, and saying you have prepared the system such that the observable takes a definite value is equivalent to preparing it in the corresponding pure state (i.e., the stat. op. is the projector defined by the eigenvector to this eigenvalue).

If the spectrum is degenerate, just knowing that the observable takes a determined value ##a## is incomplete knowledge. There are then many states describing the situation, and the question is what's the best guess for it. Here the maximum-entropy principle helps: One assigns to the situation the state operator that takes into account the information you have, but no further bias or prejudice.

Now there's an orthonormal set ##|a,\alpha \rangle## of eigenvectors to the eigenvalue ##a## of the operator ##\hat{A}##. Since it's known that the value of ##A## is with certainty ##a##, the probability for finding any other value ##a' \neq a## must vanish, i.e., ##\langle a',\alpha|\hat{\rho}|a',\alpha \rangle =0##. Since this is valid for any vector in the span of the ##|a',\alpha \rangle## with ##a'\neq a##, the statistical operator must be of the form
$$\hat{\rho}=\sum_{\alpha_1,\alpha_2} p_{\alpha_1 \alpha_2} |a,\alpha_1 \rangle \langle a,\alpha_2 |.$$
Since the matrix elements with respect to all ##|a',\alpha \rangle## with ##a \neq a'## thus vanish, the eigenvectors of ##\hat{\rho}## (with nonzero eigenvalue) must lie in the eigenspace ##\text{Eig}(a)## of ##\hat{A}##, and since ##\hat{A} \hat{\rho}=a \hat{\rho}, \quad \hat{\rho} \hat{A}=\hat{\rho} \hat{A}^{\dagger}=a \hat{\rho}## we have ##[\hat{A},\hat{\rho}]=0##. Thus there's a common set of eigenvectors of ##\hat{A}## and ##\hat{\rho}##, which must therefore span ##\text{Eig}(a)##. We can assume that the ##|a,\alpha \rangle## are those common eigenvectors, and thus the stat. op. simplifies to
$$\hat{\rho}=\sum_{\alpha} p_{\alpha} |a,\alpha \rangle \langle a,\alpha|.$$
The entropy
$$S=-\sum_{\alpha} p_{\alpha} \ln p_{\alpha}$$
must be maximized under the constraint ##\sum_{\alpha} p_{\alpha}=1##, i.e., with the Lagrange parameter for this constraint we find
$$\delta [S+\Omega(\sum_{\alpha} p_\alpha -1)]=0,$$
where now the ##p_{\alpha}## can be varied independently. Thus we have
$$\sum_{\alpha} \delta p_{\alpha} (-\ln p_{\alpha}-1+\Omega)=0$$
leading to
$$\ln p_{\alpha} =\Omega-1=\text{const}.$$
Thus all the ##p_{\alpha}## must be equal, i.e., if the eigenvalue ##a## is ##d_a## fold degenerate, you must have
$$p_{\alpha}=\frac{1}{d_{a}},$$
and finally
$$\hat{\rho}=\frac{1}{d_{a}} \sum_{\alpha=1}^{d_{a}} |a,\alpha \rangle \langle a,\alpha|.$$
For the special case that the eigenvalue is not degenerate, i.e., ##d_{a}=1##, you are again back at the corresponding pure state as discussed above.
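
A small numerical sketch of this maximum-entropy assignment (a three-dimensional toy example with a doubly degenerate eigenvalue, chosen only for illustration):

```python
# Sketch: if we only know that A takes the (doubly degenerate) value a, the
# maximum-entropy state is the normalized projector onto Eig(a), i.e. P_a / d_a.
import numpy as np

A = np.diag([1.0, 1.0, 2.0])                         # a = 1 is 2-fold degenerate
a = 1.0
mask = np.isclose(np.diag(A), a)
d_a = int(mask.sum())                                # degeneracy d_a = 2

P_a = np.diag(mask.astype(float))                    # projector onto Eig(a)
rho = P_a / d_a                                      # maximum-entropy assignment

w = np.linalg.eigvalsh(rho)
w = w[w > 1e-12]
print(-np.sum(w * np.log(w)), np.log(d_a))           # both equal ln(2)
```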
 
  • #27
Yes, and this is the only entropy that can be interpreted in terms of surprise or gain of knowledge about outcomes.
A. Neumaier said:
If I measure the up operator on a large sample prepared in state rho, the relevant entropy is that of the diagonal of rho and not that of the eigenvalues of rho. But the latter figures in statistical mechanics!
Since the diagonal of rho and the eigenvalues of rho are the same when rho is expressed in its eigenbasis, and this eigenbasis corresponds to a measurement for which knowledge of possible outcomes is maximal, could we interpret von Neumann entropy in terms of the measurement for which knowledge of possible outcomes is maximal? We would expect the trace to be preserved under rotation into different measurement bases.
 
  • #28
DarMM
They define it by analogy to the classical case, following tradition. But they never make use of it. They need the concept of entropy; once they have it, they forget about knowledge. It is only a play with words and cannot be made consistent.

In fact the only concise definition of the term knowledge in this context is to equate it with an entropy difference. The latter is well-defined, the former is vague if not defined in this way.

An entangled pure state lacks precisely as much knowledge about predicting the outcomes of up measurements as the mixture obtained by taking its diagonal.
I still don't really see the issue with thinking of it in terms of knowledge or surprise. I mean pick out a context and the resultant Gelfand representation makes it equivalent to the classical case in that context. Hence directly it seems to me that you can think of quantum entropy in similar terms to classical entropy, i.e. how informative/surprising a given measurement outcome is.

Perhaps it would be easier to ask what is entropy in QM then in your view?
 
  • #29
DarMM
An entangled pure state lacks precisely as much knowledge about predicting the outcomes of up measurements as the mixture obtained by taking its diagonal
Does it? Consider measurements in the Bell basis. The induced sample space for the Bell basis context has a probability distribution with a wider spread.

True, there are bases, e.g. ##\{|\uparrow\uparrow\rangle ,|\downarrow\uparrow\rangle ,|\uparrow\downarrow\rangle , |\downarrow\downarrow\rangle\}##, where the spread is the same and thus outcomes are equally informative. However, there are bases where those for the mixture are more informative.
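
A rough numerical sketch of this point (using the standard two-qubit product and Bell bases; illustrative only):

```python
# Sketch: outcome entropies for the Bell state |Phi+> and for the mixture given by
# its diagonal, in the product basis and in the Bell basis.
import numpy as np

def outcome_entropy(rho, basis):
    # basis columns are the measurement vectors; p_i = <b_i|rho|b_i>
    p = np.real(np.diag(basis.conj().T @ rho @ basis))
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(phi_plus, phi_plus)              # entangled pure state
rho_mix = np.diag(np.diag(rho_pure))                 # mixture from its diagonal

product_basis = np.eye(4)
bell_basis = np.column_stack([[1, 0, 0, 1], [1, 0, 0, -1],
                              [0, 1, 1, 0], [0, 1, -1, 0]]) / np.sqrt(2)

print(outcome_entropy(rho_pure, product_basis),
      outcome_entropy(rho_mix, product_basis))       # ln(2) and ln(2): same spread
print(outcome_entropy(rho_pure, bell_basis),
      outcome_entropy(rho_mix, bell_basis))          # 0 and ln(2): wider spread for the mixture
```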
 
  • #30
Hence directly it seems to me that you can think of quantum entropy in similar terms to classical entropy, i.e. how informative/surprising a given measurement outcome is.
According to Mark M. Wilde, it's not that immediate.

/Patrick
 
  • #31
DarMM
According to Mark M. Wilde, it's not that immediate.

/Patrick
What part of the notes indicates that? Section 11.1 seems to me to just talk about super-dense coding, i.e. the number of (qu)bits is lower. I'm not sure that changes the overall meaning.
 
  • #32
What part of the notes indicate that? 11.1 seem to me to just talk about super-dense coding, i.e. the number of (qu)bits is lower. I'm not sure that changes the overall meaning.
Begin here:
1.2.2 A Measure of Quantum Information

The above section discusses Shannon’s notion of a bit as a measure of information. A natural question is whether there is an analogous measure of quantum information. He develops different examples showing the difficulty.

/Patrick
 
  • #33
A. Neumaier
Does it? Consider measurements in the Bell basis. The induced sample space for the Bell basis context has a probability distribution with a wider spread.

True, there are bases, e.g. ##\{|\uparrow\uparrow\rangle ,|\downarrow\uparrow\rangle ,|\uparrow\downarrow\rangle , |\downarrow\downarrow\rangle\}##, where the spread is the same and thus outcomes are equally informative. However, there are bases where those for the mixture are more informative.
The point is that the state and hence the entropy are properties of the preparation alone, independent of the remaining context, while the outcomes, and hence the knowledge that can be gained, are properties of the triple (preparation, instance of the ensemble, measurement context). This makes the quantum case different from the classical case, where the outcomes are determined by the pair (preparation, instance of the ensemble).

Thus in the classical case, the mean gain in information, which gives a classical entropy, is a property of the preparation alone. In the quantum case, the entropy is still a property of the preparation alone. But the mean gain in information is only a property of the pair (preparation, measurement context). Thus one gets a different entropy for each measurement context, and for an up measurement context, we have what I had described before. This makes it clear that the mean gain in information has mathematically quite a different structure than the quantum entropy, and there are no good reasons to equate the two.

Indeed, there is no preparation-independent context in which the mean gain in information equals the quantum entropy! And to measure in an eigenbasis of rho itself, where the two agree, is infeasible in all but the simplest situations. This answers @Morbert's query.
 
  • #34
A. Neumaier
Perhaps it would be easier to ask what is entropy in QM then in your view?
Entropy is a basic physical quantity like energy, where we also cannot say what it ''is'' except through the mathematical relations it has to other physical concepts. This notion of entropy was undisputed and already eminently successful for 100 years before Jaynes discovered relations to the concept of information. His interpretation didn't add the slightest to the successes of the entropy concept, hence can safely be regarded as an - in my opinion interesting but often misleading - analogy.

More specifically, in the thermal interpretation, density operators ##\rho## are always positive definite (the pure case is an idealization never exactly realizable), so ##S:=-k\log\rho## is well-defined. It defines a selfadjoint entropy operator ##S##, representing the entropy as a measurable quantity, on par with other operators representing energy, components of spin, momentum, position, etc. Only the dynamical law is different. As always in the thermal interpretation, the observed value is an approximation of the q-expectation, giving the standard formula for the observable entropy.
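
A minimal numerical sketch of this definition (the example density matrix is an arbitrary full-rank choice, with ##k=1##):

```python
# Sketch: the entropy operator S = -k ln(rho) for a full-rank rho; its q-expectation
# Tr(rho S) reproduces the usual von Neumann entropy value.
import numpy as np
from scipy.linalg import logm

k = 1.0                                     # units with k_B = 1
rho = np.diag([0.7, 0.2, 0.1])              # a positive definite density matrix
S_op = -k * logm(rho)                       # entropy operator
print(np.trace(rho @ S_op).real,            # q-expectation <S>
      -k * np.sum(np.diag(rho) * np.log(np.diag(rho))))  # standard formula: same number
```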
 
  • #35
A. Neumaier
It's of course not the von Neumann entropy, which defines another entropy for another "random experiment"
For which random experiment?
 
  • #36
A. Neumaier
You are contradicting yourself, and I'm sure that is by some misunderstanding, because the math is indeed utterly simple and is taught in the 1st semester in the linear-algebra lecture: The trace of a matrix is independent of the basis used to calculate it.
Sure, the math is simple and I didn't make a mistake. I was giving a context where the proper entropy is not the trace. Thus there is no contradiction.
 
  • #37
A. Neumaier
(2) "Improper" mixtures: Assume a system is in a pure state, but you only make a measurement on a subsystem.
In reality, every quantum system is a subsystem of a larger system. Thus one must regard every state that is not artificially mixed as an improper mixture. These systems are essentially never in a pure state.

Even in case of a Stern-Gerlach beam (with the second beam blocked), one has strictly speaking not a pure up state since it is impossible to prepare the magnetic field such that it defines a unique up direction.
 
  • #38
DarMM
Well, I suppose I still don't see this difference in the quantum and classical case, as there are two common ways of looking at the quantum entropy whereupon it looks very similar to the classical Shannon entropy. I will put this post in a spoiler for those who might not know information theory.

Firstly, as has been mentioned above, the Shannon entropy for a classical information source which has outcomes ##i=1\dots k## with probability ##p_i## is the "surprise" present asymptotically in a single outcome.

This is roughly because if we assume that the information source has transmitted a large number ##N## of outcomes, then we can assume that all sequences of ##N## outcomes are "typical" in the sense that each outcome ##i## has occurred a fraction ##p_i## of the time in the sequence. Atypical sequences become vanishingly rare due to the law of large numbers, so we ignore them.

Combinatorics gives us quite simply that the number of such typical cases is:
$$\Omega = \frac{N!}{(n_1)!\dots(n_k)!}$$
with ##n_1## being the number of times outcome ##i=1## occurs, which in a typical sequence is ##Np_1## thus:
$$\Omega = \frac{N!}{(Np_1)!\dots(Np_k)!}$$
To find the amount of information needed to encode such a sequence of ##N## outcomes we simply take the log of ##\Omega##, and since ##N## is large we can use Stirling's formula to obtain:
$$\ln\left(\Omega\right) = -N\sum_{i=1}^{k}{p_i \ln\left(p_i\right)}$$
Thus the information per outcome is:
$$\frac{\ln\left(\Omega\right)}{N} = -\sum_{i=1}^{k}{p_i \ln\left(p_i\right)}$$
This is the entropy. Thus entropy is asymptotically (i.e. for large ##N##) how much a given outcome, encoded in bits, identifies which sequence of outcomes the source has output. Low-entropy sources are very likely to produce a small set of sequences, so a given bit tells you very little, as you can mostly predict the sequence in advance, i.e. there is less surprise/knowledge gained in a single outcome.
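
A quick numerical check of this limit (the source probabilities are just an example, and the log of the factorials is evaluated via the gamma function):

```python
# Sketch: ln(Omega)/N approaches the Shannon entropy H for large N.
import numpy as np
from scipy.special import gammaln

p = np.array([0.5, 0.3, 0.2])
N = 100_000
n = N * p                                               # typical occupation numbers N*p_i
log_Omega = gammaln(N + 1) - np.sum(gammaln(n + 1))     # ln of the multinomial coefficient
H = -np.sum(p * np.log(p))
print(log_Omega / N, H)                                 # nearly equal, converging as N grows
```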

To distinguish quantum and classical entropy I will call the former ##S## and the latter ##H##.

In quantum mechanics, to have outcomes we need a context, i.e. a complete set of commuting observables. This is nothing more than complementarity/counterfactual indefiniteness/"unobserved outcomes have no results". Only the quantity we measure has outcomes; other variables/observables do not.

We can always map a quantum state ##\rho## into a classical probability distribution for those observables alone using what is called a Gelfand representation, denoted ##G##. Thus to see what classical probability distribution ##\rho## gives for a context (a set of commuting observables), we transform ##\rho## into the basis associated with that context and map it with ##G##, i.e. ##G\left(U\rho U^{\dagger}\right)##.

Note: In full detail ##G## maps ##\rho## into a probability distribution over the spectrum of the observables associated with the context. For a very simple case of an observable ##A## with eigenvalues ##\lambda_i## it maps ##\rho## into a distribution ##p\left(\lambda_i \right)##

It turns out that quantum entropy obeys:
\begin{align*}
S\left(\rho\right) & = -Tr\left(\rho\ln\left(\rho\right)\right)\\
& = \min_{U}\left[H\left(G\left(U\rho U^{\dagger}\right)\right)\right]
\end{align*}
That is, each context has a Shannon entropy, and the quantum entropy is the lowest entropy among the contexts, i.e. the surprise factor or knowledge gained in the context with the least amount of surprise/greatest predictability of results.
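
A rough numerical sketch of this minimization (sampling a few arbitrary contexts rather than minimizing over all of them; the generic mixed state is a random example):

```python
# Sketch: H(diag(U rho U^dagger)) is smallest, and equals S(rho), when U rotates to
# the eigenbasis of rho; other contexts give a larger Shannon entropy.
import numpy as np

rng = np.random.default_rng(0)

def shannon(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = X @ X.conj().T
rho /= np.trace(rho).real                               # a generic mixed state

w, V = np.linalg.eigh(rho)
S = shannon(w)                                          # von Neumann entropy
H_eigenbasis = shannon(np.real(np.diag(V.conj().T @ rho @ V)))

H_random = []
for _ in range(5):                                      # a few arbitrary contexts
    Q, _r = np.linalg.qr(rng.normal(size=(3, 3)))
    H_random.append(shannon(np.real(np.diag(Q.T @ rho @ Q))))

print(S, H_eigenbasis, min(H_random))                   # S == H_eigenbasis <= all H_random
```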

This is no surprise as quantum probability is a generalization of classical probability to the case of multiple entwined sample spaces.

Thus pure states are states of maximal knowledge because they contain one context which is utterly predictable.

Above @atyy mentioned that in some cases there are no pure states. In QFT finite-volume systems have no pure states (it is possible that there are no infinite-volume pure states either, due to Coulomb fields in QED, but that is still an open issue). Thus every system has non-zero entropy, and this is nothing more than the statement that finite systems treated realistically with QFT have no context with completely predictable outcomes.

Also note here we do not need to conceive of mixed states as uncertainty about pure states. They're just states and pure states are a special case with a totally predictable context.

Now we can understand the entropy of entangled states. The entire state is pure so there is a context where the outcomes are certain, e.g. measurement in the Bell basis for entangled spin-half particles with vanishing total angular momentum. However for somebody with access to only one particle no single particle context has completely predictable outcomes and so the entropy of a single particle is non-zero.
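
A small numerical sketch of this (the Bell state and a partial trace, for illustration only):

```python
# Sketch: the Bell state is pure (entropy 0), but the reduced state of one particle,
# obtained by a partial trace, is maximally mixed with entropy ln(2).
import numpy as np

def von_neumann(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)
rho_A = np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))  # trace out the second particle

print(von_neumann(rho), von_neumann(rho_A))             # 0.0 and ln(2)
```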

An alternative characterization of entropy is how many bits it takes to faithfully transmit a source of outcomes.

For a classical source ##W## of outcomes the noiseless source coding theorem tells us that a source can be faithfully transmitted with error less than ##\epsilon## if we have access to a resource of ##H\left(W\right) + \delta## bits for each outcome. ##\epsilon## and ##\delta## become smaller as ##N##, the number of outcomes the source generates, increases. The entropy then is a hard limit, the fundamental amount of information or knowledge in an outcome.

Similarly by Schumacher's theorem a quantum source ##\rho## of outcomes can be faithfully transposed (quantum analogue of transmission) with error less than ##\epsilon## if we have access to a resource of ##S\left(\rho\right) + \delta## qubits for each outcome.

Thus it seems to me from two separate views the quantum entropy is very similar to the classical entropy and has a similar "knowledge" or "surprise" based reading. However this knowledge is not to be understood as missing knowledge of a pure state but as the information content of an outcome in the most predictable context of measurements one can perform on the system.

If one feels better saying "information" rather than knowledge I won't argue semantics of these English language words.
 
  • #39
atyy
I still don't really see the issue with thinking of it in terms of knowledge or surprise. I mean pick out a context and the resultant Gelfand representation makes it equivalent to the classical case in that context. Hence directly it seems to me that you can think of quantum entropy in similar terms to classical entropy, i.e. how informative/surprising a given measurement outcome is.

Perhaps it would be easier to ask what is entropy in QM then in your view?
As @A. Neumaier has mentioned, the classical entropy for continuous variables is not invariant under arbitrary smooth transformations, so one has to choose additional conditions to specify it, e.g. canonically conjugate variables to specify the entropy for Hamiltonian classical mechanics. For classical continuous variables, if one needs an invariant notion of information, one must use relative notions such as the mutual information or relative entropy (also called the surprise or Kullback-Leibler divergence), as these make sense for both discrete and continuous variables. So in the classical case, surprise is often considered to be different from the entropy.
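
As an illustrative sketch of this point (using Gaussians, where both quantities have closed forms; the particular means and variances are arbitrary):

```python
# Sketch: the differential entropy of a Gaussian shifts under the rescaling x -> 2x,
# while the relative entropy (KL divergence) between two Gaussians does not change.
import numpy as np

def gauss_entropy(sigma):
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def gauss_kl(mu1, s1, mu2, s2):
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

print(gauss_entropy(1.0), gauss_entropy(2.0))        # differs by ln(2) under x -> 2x
print(gauss_kl(0, 1, 1, 3), gauss_kl(0, 2, 2, 6))    # identical: KL is invariant
```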
 
  • #40
A. Neumaier
Well I suppose I still don't see this difference in the quantum and classical case as there are two common ways of looking at the quantum entropy where upon it looks very similar to the classical Shannon entropy.
I didn't know either of the two results you mention. They indeed make quantum entropy an information-theoretic property of the state alone.
Can you please give references?
To distinguish quantum and classical entropy I will call the former ##S## and the latter ##H##.
\begin{align*}
S\left(\rho\right) & = -Tr\left(\rho\ln\left(\rho\right)\right)\\
& = \min_{U}\left[H\left(G\left(U\rho U^{\dagger}\right)\right)\right]
\end{align*}
That is each context has a Shannon entropy and the quantum entropy is lowest entropy among the contexts, i.e. the surprise factor or knowledge gained in the context with the least amount of surprise/greatest predictability of results.

[...] we do not need to conceive of mixed states as uncertainty about pure states. They're just states and pure states are a special case with a totally predictable context.

Now we can understand the entropy of entangled states. The entire state is pure so there is a context where the outcomes are certain, e.g. measurement in the Bell basis for entangled spin-half particles with vanishing total angular momentum. However for somebody with access to only one particle no single particle context has completely predictable outcomes and so the entropy of a single particle is non-zero.

An alternative characterization of entropy is how many bits it takes to faithfully transmit a source of outcomes.

For a classical source ##W## of outcomes the noiseless source coding theorem tells us that a source can be faithfully transmitted with error less than ##\epsilon## if we have access to a resource of ##H\left(W\right) + \delta## bits for each outcome. ##\epsilon## and ##\delta## become smaller as ##N##, the number of outcomes the source generates, increases. The entropy then is a hard limit, the fundamental amount of information or knowledge in an outcome.

Similarly by Schumacher's theorem a quantum source ##\rho## of outcomes can be faithfully transposed (quantum analogue of transmission) with error less than ##\epsilon## if we have access to a resource of ##S\left(\rho\right) + \delta## qubits for each outcome.
What is the meaning of ''faithfully transposed''?
Thus it seems to me from two separate views the quantum entropy is very similar to the classical entropy and has a similar "knowledge" or "surprise" based reading. However this knowledge is not to be understood as missing knowledge of a pure state but as the information content of an outcome in the most predictable context of measurements one can perform on the system.

If one feels better saying "information" rather than knowledge I won't argue semantics of these English language words.
Both classically and quantum mechanically, information is a much more neutral word than knowledge, since nothing in your exposition depends on knowledge (in the usual sense) in any way. Information content has a precise, observer-independent mathematical definition independent of the meaning of the transmitted details, while knowledge is ambiguous, observer-dependent, and meaning-sensitive.

Knowledge is needed only marginally, in that one needs to know the state in order to find the optimal transmission protocol. But one needs to know the state for everything one wants to predict in science, hence this involvement of knowledge is nothing state-specific.

Nevertheless, even with your interpretation, the following remains valid:
Entropy is a basic physical quantity like energy, where we also cannot say what it ''is'' except through the mathematical relations it has to other physical concepts. This notion of entropy was undisputed and already eminently successful for 100 years before Jaynes discovered relations to the concept of information. His interpretation didn't add the slightest to the successes of the entropy concept, hence can safely be regarded as an - in my opinion interesting but often misleading - analogy.
One cannot faithfully transmit/transpose a macroscopic state, using the resources of a local region in spacetime. Thus information theory does not apply to thermodynamics. And one doesn't need information theory at all to derive thermodynamics from quantum physics.
 
  • #41
atyy
I'm not sure if @A. Neumaier would agree with my reason, but I too am not a fan of Jaynes. For me Jaynes fails because the Gibbs entropy is not unique, it is only one of the Renyi entropies.
 
  • #42
A. Neumaier
I'm not sure if @A. Neumaier would agree with my reason, but I too am not a fan of Jaynes. For me Jaynes fails because the Gibbs entropy is not unique, it is only one of the Renyi entropies.
The most serious problem with Jaynes' subjectivism is that his maximum entropy principle predicts complete physical nonsense if you assume the knowledge of the expectation of ##H^2## rather than that of ##H##. One gets the correct expressions for the density operator only if one assumes the correct knowledge.

This is a problem like in betting: One succeeds in betting only if the believed subjective probabilities are close to the actual ones. Card game players know this. Thus it is not the belief but the amount of agreement with reality that determines the probabilities.
 
  • #43
Thus it seems to me from two separate views the quantum entropy is very similar to the classical entropy and has a similar "knowledge" or "surprise" based reading. However this knowledge is not to be understood as missing knowledge of a pure state but as the information content of an outcome in the most predictable context of measurements one can perform on the system.
Even if we can find a similar point of view regarding the notion of entropy, communication through a quantum channel cannot be described by the results of classical information theory (in particular the notion of capacity); it requires a generalization of classical information theory to the quantum perception of the world.

This article "A Survey on Quantum Channel Capacities" shows the difficulties to federate the different use cases.

From "A Survey on Quantum Channel Capacities" :
Many new capacity definitions exist for quantum channels in comparison to a classical communication channel. In the case of a classical channel, we can send only classical information while quantum channels extend the possibilities, and besides the classical information we can deliver entanglement assisted classical information, private classical information, and of course, quantum information [53], [134]. On the other hand, the elements of classical information theory cannot be applied in general for quantum information –in other words, they can be used only in some special cases. There is no general formula to describe the capacity of every quantum channel model, but one of the main results of the recent researches was a simplified picture in which various capacities of a quantum channel (i.e., the classical, private, quantum) are all non-additive [242].
/Patrick
 
  • #44
DarMM
Even if we can find a similar point of view regarding the notion of entropy , communication through a quantum channel cannot be described by the results of classical information theory (in particular the notion of capacity); it requires the generalization of classical information theory by quantum perception of the world
Certainly, but this has never been in doubt or disputed. To say otherwise would be the daft claim that quantum information theory is no more than classical information theory.
The claim has never been that a quantum information channel can be described by classical information theory, but that there is a similarity between quantum entropy and classical entropy.
 
  • #45
Certainly, but this has never been in doubt or disputed.
...
but that there is a similarity between quantum entropy and classical entropy.
Yes, but there is a similarity between quantum entropy and classical entropy, and classical capacity is defined using the concept of mutual information, which is linked to the concept of entropy.

Why can't we also establish similarities between quantum capacity and classical capacity?

/Patrick
 
  • #46
DarMM
I didn't know either of the two results you mention. They make indeed quantum entropy an information theoretic property of the state alone.
Can you please give references?
Schumacher's theorem is to be found in:
Schumacher, B., Quantum coding. Phys. Rev. A 51, 2738-2747 (1995).

As for the other theorem I first learned of it in Scott Aaronson's "Quantum Computing Since Democritus", but only a found a proof later in Section 12.2 of
Bengtsson, I., & Zyczkowski, K. (2006). Geometry of Quantum States: An Introduction to Quantum Entanglement. Cambridge: Cambridge University Press

What is the meaning of ''faithfully transposed''?
Schumacher explains this well in his paper.
 
  • #47
DarMM
Why can't we also establish similarities between quantum capacity and classical capacity?
That would turn this thread into discussing every difference between classical and quantum information theory. We also don't need the notion of discord in the classical theory. There certainly are differences; that's not in dispute.
 
  • #48
vanhees71
The most serious problem with Jaynes' subjectivism is that his maximum entropy principle predicts complete physical nonsense if you assume the knowledge of the expectation of ##H^2## rather than that of ##H##. One gets the correct expressions for the density operator only if one assumes the correct knowledge.

This is a problem like in betting: One succeeds in betting only if the believed subjective probabilities are close to the actual ones. Card game players know. Thus it is not the belief but the amount of agreement with reality that determines the probabilities.
Why does the MEM lead to "complete physical nonsense" assuming to know the expectation value of ##H^2## instead of assuming to know that of ##H##?

In this case MEM gives the stat. op.
$$\hat{\rho}=\frac{1}{Z} \exp (-\lambda \hat{H}^2).$$
What's wrong with this state?
 
  • #49
A. Neumaier
Why does the MEM lead to "complete physical nonsense" assuming to know the expectation value of ##H^2## instead of assuming to know that of ##H##?

In this case MEM gives the stat. op.
$$\hat{\rho}=\frac{1}{Z} \exp (-\lambda \hat{H}^2).$$
What's wrong with this state?
It is time invariant hence stationary but leads to completely wrong predictions for thermal q-expectations such as the internal energy.
 
  • #50
DarMM
the classical entropy for continuous variables is not invariant under arbitrary smooth transformations, so one has to choose additional conditions to specify it
Nevertheless, even wih your interpretation, the following remains valid
I'm not sure if @A. Neumaier would agree with my reason, but I too am not a fan of Jaynes. For me Jaynes fails because the Gibbs entropy is not unique, it is only one of the Renyi entropies.
Regardless of these issues with the interpretation of classical entropy, are we agreed that mixed states are just general quantum states, for two reasons:
  1. Their state space is ##Tr\left(\mathcal{H}\right)## not ##\mathcal{L}^{1}\left(\mathcal{H}\right)##, thus they seem not to quantify classical ignorance of a pure state since they cannot be read as probability distributions over pure states

  2. In QFT finite volume systems have no pure states.
Pure states are then just a special case where you have one totally predictable context; they don't constitute "the true state" of which one is ignorant. Such a totally predictable context seems to be absent in QFT; there is always some measurement uncertainty in QFT, thus only mixed states.
 
