# Microcanonical ensemble density matrix

Ref: R. K. Pathria, *Statistical Mechanics* (third edition, Sec. 5.2A)
First it is argued that the density matrix for the microcanonical ensemble will be diagonal, with all diagonal elements equal, in the energy representation. Then it is said that this general form should remain the same in all representations, i.e., all the off-diagonal elements zero and the diagonal elements all equal to one another.
This is my question: why should the form necessarily be the same in all representations? To say that all diagonal elements (in any representation) are equal means that the probabilities for measuring all eigenvalues of any operator are the same. We may argue that the probability of measuring any energy eigenvalue within the specified range is the same (based on the postulate of equal a priori probabilities).


atyy
Pathria doesn't say that. He says in other representations the density matrix is not diagonal. He does say it will still be symmetric.

> Pathria doesn't say that. He says in other representations the density matrix is not diagonal. He does say it will still be symmetric.

Thanks, but Pathria does say it. Please refer to pages 119-120 in *Statistical Mechanics*, third edition, by Pathria. These pages correspond to section 5.2. It is given so in other editions too (page 118 in the first edition, page 108 in the second edition).
On page 119,
$$\rho_{mn} = \rho_n \delta_{mn} \qquad (1)$$
$$\rho_n = \begin{cases} 1/\Gamma & \text{for each of the accessible states,} \\ 0 & \text{for all other states.} \end{cases} \qquad (2)$$
On page 120:
"The density matrix in the energy representation is then given by equations (1) and (2). If we now change over to any other representation, the general form of the density matrix should remain the same, namely (i) the off-diagonal elements should continue to be zero, while (ii) the diagonal elements (over the allowed range) should continue to be equal to one another."

In fact he invokes the postulate of random a priori phases to ensure that the off-diagonal elements are zero. Is it valid for eigenstates of any operator, or just the energy eigenstates?

atyy
I see. I looked up http://ocw.mit.edu/courses/physics/8-333-statistical-mechanics-i-statistical-mechanics-of-particles-fall-2007/lecture-notes/lec21.pdf (p. 136, link now broken) and http://www.jamia-physics.net/lecnotes/statmech/lec09.pdf (p. 2), and it seems the random phase assumption is applied in the energy basis.

Yes, I found the postulate of random phases being applied to the energy representation elsewhere too. But Pathria has the density matrix automatically diagonal in the energy basis and makes it diagonal in all other bases by applying this postulate. This is what I find confusing. In lecture 9 of the link you gave, it is said that the energy eigenstates have a special status by this postulate...

What I find most confusing is the equality of the diagonal elements of the density matrix in all representations. I am not able to understand this.

kith
If the off-diagonal elements are all zero and the diagonal elements are all equal, your density matrix is proportional to the identity matrix. The identity matrix can be decomposed into eigenstates of any observable, so the density matrix looks the same for all observables.

From a statistical / information-theoretic point of view, such a density matrix corresponds to a situation where you know nothing about your system. So you just assign equal probabilities to all possible (pure) states.

Also, for really high temperatures, the density matrix of any system approximately has this form. For $T \rightarrow \infty$, the Boltzmann factors $$\exp\left(-\frac{E_n}{kT} \right)$$ all become equal to $1$, so the equilibrium density matrix is, up to a normalization constant, equal to the identity matrix.
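This high-temperature limit is easy to check numerically. The sketch below (the toy energy levels and the choice $k = 1$ are my own arbitrary assumptions, not from the thread) shows the normalized Boltzmann weights flattening toward $1/N$ as $T$ grows:

```python
import numpy as np

# Toy energy levels in arbitrary units, with Boltzmann's constant k = 1.
E = np.array([0.0, 1.0, 2.0, 3.0])

for T in (1.0, 10.0, 1e4):
    w = np.exp(-E / T)       # Boltzmann factors exp(-E_n / kT)
    p = w / w.sum()          # diagonal of the canonical density matrix
    print(f"T = {T:>7}: diag(rho) = {np.round(p, 4)}")
```

At the largest temperature all four diagonal entries are essentially $1/4$, i.e., the density matrix is close to the identity divided by the number of states.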

I don't know the exact context of your question because I don't have the book.

> If the off-diagonal elements are all zero and the diagonal elements are all equal, your density matrix is proportional to the identity matrix. The identity matrix can be decomposed into eigenstates of any observable, so the density matrix looks the same for all observables.

That's right, but the density matrix is made diagonal in this book by invoking the postulate of random phases. The diagonal elements are said to be equal due to the postulate of equal a priori probabilities. My question then is this: does the postulate of equal a priori probabilities imply that all eigenstates of any operator have the same probabilities? Or is it a statement about energy eigenstates only?

I understand what you said. But do we know a priori that the density matrix for microcanonical ensemble is proportional to the identity matrix? This is what Pathria seems to imply. Wouldn't it depend on the states of the system and the basis chosen?

vanhees71
I think this book by Pathria is pretty enigmatic, although I've not looked at it in great detail. A clear distinction between abstract Hilbert-space vectors and representations is mandatory for a well-understandable derivation. I can just copy my answer from a private communication with the OP:

I don't understand what Pathria is saying at the beginning of this section at all. Let's translate it to the usual notation of quantum statistics. The microcanonical ensemble is correctly described in words: it's a closed system with fixed particle number and volume and an energy in a finite (!) interval $E \in (E_0-\Delta/2,E_0+\Delta/2)$. Now let's assume a system with a continuous energy spectrum, as is usual for, e.g., an ideal gas. Then the microcanonical statistical operator is
$$\hat{\rho}=\frac{1}{\Delta} \int_{E_0-\Delta/2}^{E_0+\Delta/2} \mathrm{d} E |E \rangle \langle E|.$$
In general, if written in another basis, it's not diagonal anymore. Why should it be?

The statistical operator of a pure state is
$$\hat{\rho}_{\psi}=|\psi \rangle \langle \psi|,$$
where $|\psi \rangle$ is normalized to 1. Of course $\hat{\rho}_{\psi}$ is a projection operator, and a positive semidefinite self-adjoint operator with trace 1 represents a pure state if and only if it's a projection operator. It is, of course, also diagonal only in this one representation, not in an arbitrary basis.

I don't understand the notation of Eq. (5) in the book. With respect to an arbitrary basis $|u_n \rangle$ the matrix elements are
$$\rho_{\psi n_1n_2}=\langle u_{n_1}|\hat{\rho}_{\psi}|u_{n_2} \rangle=\psi(n_1) \psi^*(n_2),$$
where
$$\psi(n)=\langle u_n |\psi \rangle.$$
So I guess that's what he means by $a_n$ in his strange notation.
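The pure-state properties stated above ($\hat{\rho}_{\psi}$ is a projector with unit trace, and $\rho_{\psi n_1 n_2} = \psi(n_1)\psi^*(n_2)$) can be checked with a short numerical sketch; the amplitudes below are arbitrary illustrative numbers, not anything from the thread:

```python
import numpy as np

# An arbitrary normalized state vector |psi> in a 3-dimensional basis.
psi = np.array([0.6, 0.8j, 0.0])
psi = psi / np.linalg.norm(psi)

# rho = |psi><psi|, i.e., rho_{n1 n2} = psi(n1) psi*(n2).
rho = np.outer(psi, psi.conj())

# A pure state is a projector with unit trace: rho^2 = rho, Tr rho = 1.
print(np.allclose(rho @ rho, rho))           # True
print(np.isclose(np.trace(rho).real, 1.0))   # True
```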

kith
> That's right, but the density matrix is made diagonal in this book by invoking the postulate of random phases. The diagonal elements are said to be equal due to the postulate of equal a priori probabilities. My question then is this: does the postulate of equal a priori probabilities imply that all eigenstates of any operator have the same probabilities?
If it worked, the argument would go like this:
$$\rho = \sum_n p |E_n\rangle \langle E_n| = p \sum_n |E_n\rangle \langle E_n| = p \, \hat{1} = p \sum_i |a_i\rangle \langle a_i| = \sum_i p |a_i\rangle \langle a_i|,$$
where $|a_i\rangle$ are the eigenstates of an arbitrary observable $A$.

But reading vanhees71's post, I realized that I wasn't talking about the microcanonical ensemble at all. Since only a small number of energy eigenstates are present in the mixture, we are far from getting the identity matrix. I too think that the statement is false.

vanhees71
No! The sum over $n$ is of course not complete, because then you'd have $\hat{\rho} \propto \hat{1}$, which is the probability distribution of complete ignorance and doesn't make sense at all, because you cannot normalize its trace to 1 (except if you are in a finite-dimensional space of states, as for a spin observable or something similar).

For a discrete energy spectrum (e.g., a particle or many particles in a harmonic oscillator potential), the whole thing is easier than my example. Then you have
$$\hat{\rho}_{\text{micro}}=\frac{1}{N} \sum_{k=1}^{N} |E_k \rangle \langle E_k|.$$
This is motivated from an information-theoretic point of view by maximizing the entropy for the case that you know that the energy of your system takes one of the values $\{E_1,\ldots ,E_N \}$ but for sure not one of the values $E_k$ with $k \in \{N+1, N+2, \ldots\}$.

Then in any other basis $|a_i \rangle$ you have
$$\rho_{\text{micro},ij}=\langle a_i |\hat{\rho}_{\text{micro}}|a_j \rangle = \frac{1}{N} \sum_{k=1}^{N} \langle a_i|E_k \rangle \langle E_k|a_j \rangle.$$
The matrix in the representation with respect to the new basis is not necessarily diagonal!
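This is easy to verify numerically. In the sketch below, the dimensions, the seed, and the change-of-basis unitary are arbitrary choices of mine for illustration:

```python
import numpy as np

# rho_micro = (1/N) * sum of projectors onto the first N energy eigenstates
# of a D-dimensional space, with N < D.  Diagonal in the energy basis.
D, N = 4, 2
rho = np.zeros((D, D), dtype=complex)
rho[:N, :N] = np.eye(N) / N

# Build a generic unitary U via QR decomposition of a random complex matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
U, _ = np.linalg.qr(A)

# Transform to the new basis and inspect the off-diagonal part.
rho_new = U.conj().T @ rho @ U
off = rho_new - np.diag(np.diag(rho_new))
print(np.allclose(off, 0))   # False: not diagonal in the new basis
```

The trace and Hermiticity survive the basis change, but the diagonality does not, exactly as stated.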

kith
> No! The sum over $n$ is of course not complete, because then you'd have $\hat{\rho} \propto \hat{1}$, which is the probability distribution of complete ignorance and doesn't make sense at all, because you cannot normalize its trace to 1 (except if you are in a finite-dimensional space of states, as for a spin observable or something similar).
Yes, that assumption was implicit in my equation. So it's a bad equation for two reasons, thanks for pointing it out.

> Then in any other basis $|a_i \rangle$ you have
> $$\rho_{\text{micro},ij}=\langle a_i |\hat{\rho}_{\text{micro}}|a_j \rangle = \frac{1}{N} \sum_{k=1}^{N} \langle a_i|E_k \rangle \langle E_k|a_j \rangle.$$
> The matrix in the representation with respect to the new basis is not necessarily diagonal!

Yes. It is now that Pathria invokes the postulate of random phases to make it diagonal. The average of $$c_n c_m^*$$ becomes zero for $n \neq m$ due to random phases among the states, i.e., writing $c_n = |c_n| e^{i\theta_n}$,
$$\langle c_n c_m^* \rangle = |c_n||c_m| \langle e^{i(\theta_n - \theta_m)} \rangle = |c_n||c_m| \delta_{nm} = |c_n|^2 \delta_{nm}.$$
In order to make all the diagonal elements equal, the postulate of equal a priori probabilities is invoked. Thus $$|c_n|^2 = |c|^2$$ for all $n$.
Is the postulate of random phases true for eigenstates of all operators, or just energy eigenstates?
Does the postulate of equal a priori probabilities say that all eigenstates of any operator are equally probable to be realized under a measurement? I think the question boils down to the validity and exact meaning of these two statements.
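The random-phase average $\langle e^{i(\theta_n - \theta_m)} \rangle = \delta_{nm}$ used above can be checked with a small Monte Carlo sketch; the sample size and seed are arbitrary choices:

```python
import numpy as np

# Average exp(i(theta_n - theta_m)) over independent uniform random phases.
rng = np.random.default_rng(1)
M = 200_000
theta_n = rng.uniform(0, 2 * np.pi, M)
theta_m = rng.uniform(0, 2 * np.pi, M)

avg_offdiag = np.mean(np.exp(1j * (theta_n - theta_m)))   # case n != m
avg_diag = np.mean(np.exp(1j * (theta_n - theta_n)))      # case n == m

print(abs(avg_offdiag) < 0.01)    # True: off-diagonal average vanishes
print(np.isclose(avg_diag, 1.0))  # True: diagonal average is exactly 1
```

The residual magnitude of the off-diagonal average scales like $1/\sqrt{M}$, so it shrinks with more samples.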

> That's intuitive: since you don't know more than the energy range within which the system is prepared, you only know it must be in an eigenstate of such an energy, and since you don't know more, none of these states is somehow different from the others, and thus the probability distribution with the least prejudice is to assume that they are equally probable and that there are no further correlations in the system. This leads to the microcanonical statistical operator.

From a correspondence with one of my professors, I understand that observables cannot distinguish states within the microcanonical subspace. So the density matrix in this subspace is always going to be proportional to the identity matrix.
Also, I see that what vanhees71 says here has to be applicable to the eigenvectors of all other operators too. So all of them have to be taken to be equally probable. This, I think, explains the fact that all diagonal elements have to be equal. Now the random phases resulting from interactions with the external world would make sure that all off-diagonal elements are zero. Conclusion: the density matrix in the microcanonical subspace is always proportional to the identity matrix.

vanhees71
No! You do know something about the system when considering the microcanonical ensemble, namely that the energy is for sure in a certain interval, and this makes the energy eigenstates special. That's why, according to the maximum-entropy principle, the statistical operator is diagonal in the energy eigenbasis, the states with eigenvalues in the given interval are equally probable, and all others have probability 0. This does not imply that the same is the case for any other observable, and that's why the statistical operator of the microcanonical ensemble is usually not diagonal with respect to the eigenstates of another observable!
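The maximum-entropy step can be illustrated numerically: among probability distributions over the $N$ accessible energy eigenstates, the uniform one maximizes the entropy $-\sum_n p_n \ln p_n$, reaching the value $\ln N$. The trial distributions below are arbitrary examples:

```python
import numpy as np

def entropy(p):
    """Shannon / von Neumann entropy -sum p ln p of a diagonal density matrix."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

N = 5
uniform = np.full(N, 1.0 / N)                       # equal a priori probabilities
biased = np.array([0.5, 0.2, 0.15, 0.1, 0.05])      # some other normalized choice

print(entropy(uniform) > entropy(biased))       # True
print(np.isclose(entropy(uniform), np.log(N)))  # True: S_max = ln N
```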

Another thing is when you measure an observable and find it to lie in a certain interval. Then, due to decoherence via interactions of the quantum system with the measurement apparatus and/or "the environment", the phases are averaged out, and you assign another statistical operator, again using the maximum-entropy principle, adapted to the new knowledge you gained through the measurement. If this observable is not compatible with the energy (i.e., if it is not a conserved quantity), the new statistical operator is no longer a microcanonical one.

Here is another explanation from a correspondence with one of the authors. Now this seems simple: if the density matrix is proportional to the identity matrix in the energy representation, then it has to be so in any other representation. This is because the matrices in different bases have to be connected by unitary transformations, and a unitary transformation of a matrix proportional to the identity matrix gives back the same matrix (that is, as long as we are confined to the microcanonical subspace).

vanhees71
The canonical density matrix is not proportional to the identity operator. In general, the identity operator cannot be a statistical operator at all, because it doesn't have a finite trace (unless you are in a finite-dimensional Hilbert space; even then it refers to the state of minimal possible information and not to the microcanonical ensemble, where the energy is known to be in a certain (small) range of possible values).

> The canonical density matrix is not proportional to the identity operator. In general, the identity operator cannot be a statistical operator at all, because it doesn't have a finite trace (unless you are in a finite-dimensional Hilbert space; even then it refers to the state of minimal possible information and not to the microcanonical ensemble, where the energy is known to be in a certain (small) range of possible values).
Not the canonical density matrix. I was referring solely to the microcanonical ensemble.

Let us see what goes wrong in the following reasoning.
1)
> For a discrete energy spectrum (e.g., a particle or many particles in a harmonic oscillator potential), the whole thing is easier than my example. Then you have
> $$\hat{\rho}_{\text{micro}}=\frac{1}{N} \sum_{k=1}^{N} |E_k \rangle \langle E_k|.$$
> This is motivated from an information-theoretic point of view by maximizing the entropy for the case that you know that the energy of your system takes one of the values $\{E_1,\ldots ,E_N \}$ but for sure not one of the values $E_k$ with $k \in \{N+1, N+2, \ldots\}$.

2)
> Then, using the von Neumann entropy as the measure of missing information (which can be motivated mathematically quite convincingly), you find that each energy eigenstate with an energy value in the known range must have equal probability. That's intuitive: since you don't know more than the energy range within which the system is prepared, you only know it must be in an eigenstate of such an energy, and since you don't know more, none of these states is somehow different from the others, and thus the probability distribution with the least prejudice is to assume that they are equally probable and that there are no further correlations in the system. This leads to the microcanonical statistical operator.

So in the energy representation, the microcanonical density matrix has the following form:
$$\rho_{mn} = \frac{1}{N} \delta_{mn}.$$
Clearly, here it is proportional to the identity operator.

Now the matrix in any other representation has to be related to this one by a unitary transformation. Making a unitary transformation of this matrix will again give us the same form, won't it? Because a unitary transformation of the identity matrix is again the identity matrix.

So, doesn't this reasoning imply that it has to be proportional to the identity matrix in any representation? If not, what has gone wrong in the above?

Perhaps this whole thing is true for a discrete basis...
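A minimal numerical check of the unitary-invariance argument, restricted entirely to the $N$-dimensional microcanonical subspace (the value of $N$, the seed, and the random unitary are arbitrary choices of mine):

```python
import numpy as np

# Within the accessible subspace alone, rho = (1/N) * identity.
N = 3
rho = np.eye(N) / N

# Any unitary acting *within* that subspace leaves rho unchanged.
rng = np.random.default_rng(2)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(A)   # QR of a random complex matrix yields a unitary

rho_new = U.conj().T @ rho @ U
print(np.allclose(rho_new, rho))   # True: U^dagger (1/N) U = (1/N) * 1
```

So the invariance holds as long as the basis change does not mix the subspace with the rest of the Hilbert space; a unitary on the full space generally destroys the diagonality, as shown earlier in the thread.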

kith