1. Apr 12, 2008

### jdstokes

Due to the probabilistic nature of measurement in quantum mechanics, one inevitably needs to introduce the concept of an ensemble in order to make a well-defined statement about the outcome of a measurement. Namely, one introduces the expectation value, defined as the average result of measurements on an ensemble of identically prepared systems.

Density matrices find utility when the ensemble under consideration consists of more than one quantum-mechanical state. In this case one must distinguish between the coherent superposition within each state and the incoherent superposition of the different states that make up the ensemble. Thus one introduces the density operator $\hat{\rho} = \sum_i w_i |\psi_i\rangle \langle \psi_i|$. The ensemble expectation value is then obtained, independently of basis, by taking the trace of $\hat{A}\hat{\rho}$.
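As a numerical sketch of the formulas above (the specific states, weights, and observable here are illustrative choices, not from the post), one can check that $\mathrm{Tr}(\hat{A}\hat{\rho})$ agrees with the weighted average $\sum_i w_i \langle\psi_i|\hat{A}|\psi_i\rangle$:

```python
import numpy as np

# Hypothetical two-member ensemble of qubits: 50% prepared in |0>, 50% in |+>.
ket0 = np.array([1.0, 0.0], dtype=complex)
ketp = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
weights = [0.5, 0.5]
states = [ket0, ketp]

# Density operator: rho = sum_i w_i |psi_i><psi_i|
rho = sum(w * np.outer(psi, psi.conj()) for w, psi in zip(weights, states))

# Observable: Pauli-Z as an example of A
A = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

# Basis-independent ensemble expectation: Tr(A rho)
expect_trace = np.trace(A @ rho).real

# Same quantity computed state by state: sum_i w_i <psi_i|A|psi_i>
expect_direct = sum(w * (psi.conj() @ A @ psi).real
                    for w, psi in zip(weights, states))

print(expect_trace, expect_direct)  # both 0.5
```

The trace form is the useful one in practice because it makes no reference to which decomposition $\{w_i, |\psi_i\rangle\}$ produced $\hat{\rho}$.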

I am wondering how the density matrix formalism manages to avoid any mention of joint Hilbert spaces, considering that it deals with more than one quantum-mechanical system. Is it necessary to assume that the states which make up the ensemble are non-interacting? If so, then the state of the system would be described by some infinite tensor product state $\otimes_{i=1}^\infty |\alpha_i\rangle \in \mathcal{H}^{\otimes\infty}$. Why is it that with finite numbers of particles it is important to deal with the combined Hilbert space, but with infinite ensembles we manage to avoid this issue completely?

Last edited: Apr 12, 2008
2. Apr 12, 2008

### Ken G

I don't think the key issue is finite vs. infinite, as one could certainly use the density matrix formalism on a finite set of particles and calculate not only expectation values but also standard deviations around those expectations. I think the key idea is indeed the assumption that we need not track any coherences between the individual particle states. Note the same assumption appears whenever you treat a closed system: we know the system has a history of some kind that will effectively open it, but we choose not to trace that history, so we have only the wave function of, say, a set of electrons in an atom. Since the electrons in that atom are indistinguishable from every other electron in the universe, in principle we would need the wave function of the entire universe to find the energy levels of a single atom. In practice we don't need that, because we are treating an atom whose coherences with its history were either erased by how we prepared the atom or averaged over when we chose our analysis mode.

In other words, the density matrix approach can be applied probabilistically even to a single particle; the key is that we obtain the distribution by averaging over all the ways the particle could have been prepared in what we are calling "the identical initial state". That averaging process breaks the pure state, and allows us to extend the distribution to any number of copies of a single-particle Hilbert space, rather than one joint Hilbert space containing everything that could possibly have contributed to the preparation of those particles.
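The "averaging over preparations breaks the pure state" point can be made concrete with a small sketch (the uniform-phase model here is an illustrative assumption, not from the post): averaging $|\psi(\phi)\rangle = (|0\rangle + e^{i\phi}|1\rangle)/\sqrt{2}$ over an unknown preparation phase $\phi$ kills the off-diagonal coherences and leaves the maximally mixed state $I/2$.

```python
import numpy as np

# Average the projector |psi(phi)><psi(phi)| over uniformly distributed
# (i.e. completely unknown) preparation phases phi.
phis = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

rho = np.zeros((2, 2), dtype=complex)
for phi in phis:
    psi = np.array([1.0, np.exp(1j * phi)], dtype=complex) / np.sqrt(2)
    rho += np.outer(psi, psi.conj()) / len(phis)

# Each |psi(phi)> is pure, but the phase-averaged rho is I/2:
# the off-diagonal terms (which carry the coherence) average to zero.
purity = np.trace(rho @ rho).real  # 1 for a pure state, 1/2 here

print(np.round(rho, 10))
print(purity)
```

Each individual preparation is a pure state with purity $\mathrm{Tr}(\rho^2) = 1$; it is only our ignorance of $\phi$, encoded by the average, that produces a mixed $\rho$ with purity $1/2$.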

3. Apr 14, 2008

### genneth

There is no need to invoke ensembles when talking about probabilities. The state of a system is just a mathematical encoding of the knowledge the observer has about the system. No more, no less.