# Von Neumann entropy for "similar" PVM observables

**forkosh**

**TL;DR:** Why is the entropy different if the PVMs use the same resolution of the identity?
The von Neumann entropy for an observable can be written ##s=-\sum_i\lambda_i\log\lambda_i##, where the ##\lambda_i## are its eigenvalues. So suppose you have two different PVM observables, say ##A## and ##B##, that both represent the same resolution of the identity but simply have different eigenvalues, with ##\lambda_{A_i}>\lambda_{B_i}## for every ##i##. Then ##s_A>s_B##, but why should that be?

If they both represent the same resolution of the identity, then exactly the same experimental apparatus measures them both: just change the labels on the pointer dial from the ##A##-values to the ##B##-values. For example, the ##A##-measurement could be mass in grams, whereas ##B## is simply the same mass in kilograms. Why should the entropy of those two measurements be any different?
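A quick numerical sketch of the observation in the question, applying the asker's formula ##-\sum\lambda\log\lambda## directly to measurement values (the values here are made up for illustration; as the reply below points out, this is not how the von Neumann entropy is actually defined):

```python
import numpy as np

# Hypothetical "eigenvalue entropy" as written in the question: s = -sum(lam * log(lam)).
# NOTE: this is NOT the standard von Neumann entropy, which applies to a state rho.
def s(lam):
    return -np.sum(lam * np.log(lam))

lam_A = np.array([0.5, 1.5, 2.0])   # made-up measurement values, in grams
lam_B = lam_A / 1000.0              # the same outcomes relabeled in kilograms

# The formula is not invariant under relabeling the dial: s(lam_A) != s(lam_B),
# which is exactly the puzzle the question raises.
```

Evaluating both shows the two "entropies" differ, even though the experimental apparatus is unchanged.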

## Answers and Replies

There is no such thing as von Neumann entropy of an observable. The von Neumann entropy is an entropy of a state, represented by a positive density matrix ##\rho## which satisfies ##{\rm Tr}{\rho}=1##. Due to the latter condition, it's impossible that all eigenvalues of one ##\rho## are larger than all eigenvalues of another ##\rho## in the same Hilbert space.
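A minimal numerical sketch of this point (the example density matrix is made up): the von Neumann entropy is computed from the eigenvalues of a state ##\rho##, and the constraint ##{\rm Tr}\,\rho=1## forces those eigenvalues to sum to 1, so one state's spectrum cannot dominate another's term by term.

```python
import numpy as np

# Example density matrix: positive, with Tr(rho) = 1 (values chosen for illustration).
rho = np.diag([0.7, 0.2, 0.1])

p = np.linalg.eigvalsh(rho)        # eigenvalues of rho are probabilities
assert np.isclose(p.sum(), 1.0)    # Tr(rho) = 1 forces this normalization

# Von Neumann entropy S = -Tr[rho ln rho], evaluated via the spectrum.
S = -np.sum(p * np.log(p))

# Since any other density matrix's eigenvalues must also sum to 1,
# they cannot all be larger than the eigenvalues of this rho.
```

Relabeling a pointer dial changes the observable's eigenvalues, but it does not change the state ##\rho##, so it cannot change this entropy.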

$$S=-k_{\text{B}} \langle \ln \hat{\rho}\rangle=-k_{\text{B}} \mathrm{Tr}[\hat{\rho} \ln \hat{\rho}],$$