Ensembles and density matrices - I don't get it

  • #1
Erland
Science Advisor
I'm trying to learn some basic quantum mechanics, mostly from a mathematical perspective. I am trying to understand quantum states as vectors in a Hilbert space, bipartite systems, the difference between superpositions and ensembles of states, and density matrices. It is the last two items I simply don't get. I have been reading Wikipedia, Hugh Everett's thesis, and this paper.

Following the latter paper, p. 16 ff., we assume that we have a bipartite system ##\mathcal H_A\otimes\mathcal H_B##, where ##\mathcal H_A## and ##\mathcal H_B## are finite dimensional complex Hilbert spaces. ##\{|i\rangle_A\}## and ##\{|\mu\rangle_B\}## are orthonormal bases of ##\mathcal H_A## and ##\mathcal H_B##, respectively. A pure state in ##\mathcal H_A\otimes\mathcal H_B## can then be expanded as ##|\psi\rangle_{AB}=\sum_{i,\mu}a_{i\mu}|i\rangle_A\otimes|\mu\rangle_B##, with ##\sum_{i,\mu}|a_{i\mu}|^2=1##.
Let ##\mathbf M_A## be a Hermitian operator on ##\mathcal H_A## representing an observable, with eigenvalues ##\lambda_k## and an orthonormal eigenbasis ##|k\rangle_A## (for simplicity, we assume that all the eigenvalues are nondegenerate, i.e. their eigenspaces have dimension 1).
Now, a measurement of this observable upon a bipartite state ##|\psi\rangle_{AB}## is represented by the operator ##\mathbf M_A\otimes \mathbf 1_B##, where ##\mathbf 1_B## is the identity operator on ##\mathcal H_B##.
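(To make this concrete for myself, here is a minimal numpy sketch of the setup. The dimensions, the matrix for ##\mathbf M_A##, and the random state are arbitrary toy choices of mine, not taken from the paper.)

[code]
import numpy as np

# Toy dimensions for H_A and H_B (arbitrary choices for illustration)
dA, dB = 2, 3

# A random normalized pure state |psi>_AB, stored as a dA x dB
# coefficient matrix a[i, mu]
rng = np.random.default_rng(0)
a = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
a /= np.linalg.norm(a)  # enforces sum_{i,mu} |a_{i mu}|^2 = 1

# A Hermitian observable M_A on H_A (chosen to have nondegenerate eigenvalues)
M_A = np.array([[1.0, 0.5],
                [0.5, -1.0]])

# The corresponding operator on the composite space: M_A tensor 1_B
M_AB = np.kron(M_A, np.eye(dB))
[/code]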

In the paper (just as in Everett), the issue is then to calculate the expectation of the observable ##\mathbf M_A\otimes \mathbf 1_B##, from this construct a density matrix/operator, and conclude that this represents an ensemble of states, not a superposition. This is what I don't understand, and moreover, I don't see the point of it.

All the above is based upon a calculation of the expectation of the observable. But why care so much about the expectation? Isn't it more important to find the probabilities of the different outcomes of the measurement, the possible states after the measurement, and their probabilities?

This is how I would do it: First, I would express ##|\psi\rangle_{AB}## in the basis ##\{|k\rangle_A\otimes|\mu\rangle_B\}## instead of ##\{|i\rangle_A\otimes|\mu\rangle_B\}##:
##|\psi\rangle_{AB}=\sum_{k,\mu}b_{k\mu}|k\rangle_A\otimes|\mu\rangle_B##. The ##b_{k\mu}## are obtained from the ##a_{i\mu}## by a unitary change of coordinates.

Then, a measurement of the ##\mathbf M_A## observable is performed by applying the ##\mathbf M_A\otimes \mathbf 1_B## observable to the given state. The outcome of this measurement is ##\lambda_k## with probability ##p_k=\sum_\mu |b_{k\mu}|^2##, and in that case (assuming this probability is nonzero), the state after the measurement is ##|k\rangle_A\otimes\frac1{\sqrt{p_k}}\sum_{\mu}b_{k\mu}|\mu\rangle_B##.
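(Continuing the toy numpy sketch from above, this procedure is only a few lines, so I don't see what the density matrix buys me:)

[code]
# Outcome probabilities and post-measurement states, computed directly
# from the state vector as described above (no density matrix needed)
lam, V = np.linalg.eigh(M_A)  # columns of V are the eigenvectors |k>_A

# Unitary change of coordinates: b[k, mu] = <k|i> a[i, mu]
b = V.conj().T @ a

for k in range(dA):
    p_k = np.sum(np.abs(b[k]) ** 2)  # P(outcome = lam[k]) = sum_mu |b_{k mu}|^2
    if p_k > 0:
        chi_B = b[k] / np.sqrt(p_k)  # normalized B-part of the collapsed state
        print(f"outcome {lam[k]:+.3f} with probability {p_k:.3f}")
[/code]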

So, I don't understand why density matrices/operators and ensembles are needed here...
 

Answers and Replies

  • #2
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
All the above is based upon a calculation of the expectation of the observable. But why care so much about the expectation? Isn't it more important to find the probabilities of the different outcomes of the measurement, the possible states after the measurement, and their probabilities?
Well, knowing expectation values and knowing probabilities are basically the same thing. Suppose you have some operator [itex]A[/itex] with a complete set of eigenstates [itex]|\psi_i\rangle[/itex] and eigenvalues [itex]\alpha_i[/itex] (that is, [itex]A|\psi_i\rangle = \alpha_i |\psi_i\rangle[/itex]). Then for each [itex]j[/itex] there is a corresponding projection operator [itex]\Pi_{A, \alpha_j}[/itex] with [itex]\Pi_{A, \alpha_j} |\psi_k\rangle = |\psi_k\rangle[/itex] if [itex]j = k[/itex], and [itex]\Pi_{A, \alpha_j} |\psi_k\rangle = 0[/itex] otherwise. The expectation value of [itex]\Pi_{A, \alpha_j}[/itex] in a general state [itex]|\psi\rangle[/itex] (not necessarily an eigenstate of [itex]A[/itex]) is then equal to the probability that a measurement of [itex]A[/itex] would return [itex]\alpha_j[/itex].

So there is no loss of generality in talking about expectation values. For lots of systems that involve many particles, the expectation value is more readily measurable than the individual probabilities. The other thing about expectation values is that computing the expectation value of an operator doesn't require you to figure out its complete set of eigenstates and eigenvalues, so it's often easier to compute.
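(A quick numerical check of that claim, with a randomly chosen operator and state; nothing here is specific to your paper:)

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A random Hermitian operator A and a random normalized state |psi>
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (X + X.conj().T) / 2
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

alpha, W = np.linalg.eigh(A)  # eigenvalues alpha_j, eigenvectors W[:, j]

for j in range(n):
    v = W[:, j]
    Pi_j = np.outer(v, v.conj())  # projector onto the eigenstate |psi_j>
    expect_Pi = np.real(psi.conj() @ Pi_j @ psi)  # <psi| Pi_j |psi>
    born = np.abs(v.conj() @ psi) ** 2            # Born rule: |<psi_j|psi>|^2
    assert np.isclose(expect_Pi, born)
[/code]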
 
  • #3
Erland
Science Advisor
Well, knowing expectation values and knowing probabilities are basically the same thing. Suppose you have some operator [itex]A[/itex] with a complete set of eigenstates [itex]|\psi_i\rangle[/itex] and eigenvalues [itex]\alpha_i[/itex] (that is, [itex]A|\psi_i\rangle = \alpha_i |\psi_i\rangle[/itex]). Then for each [itex]j[/itex] there is a corresponding projection operator [itex]\Pi_{A, \alpha_j}[/itex] with [itex]\Pi_{A, \alpha_j} |\psi_k\rangle = |\psi_k\rangle[/itex] if [itex]j = k[/itex], and [itex]\Pi_{A, \alpha_j} |\psi_k\rangle = 0[/itex] otherwise. The expectation value of [itex]\Pi_{A, \alpha_j}[/itex] in a general state [itex]|\psi\rangle[/itex] (not necessarily an eigenstate of [itex]A[/itex]) is then equal to the probability that a measurement of [itex]A[/itex] would return [itex]\alpha_j[/itex].

So there is no loss of generality in talking about expectation values. For lots of systems that involve many particles, the expectation value is more readily measurable than the individual probabilities. The other thing about expectation values is that computing the expectation value of an operator doesn't require you to figure out its complete set of eigenstates and eigenvalues, so it's often easier to compute.
OK, but then we must assume that we have access to several operators for measurements, such as the projection operators you mentioned, not only the one operator ##A##. But in a measurement situation, we measure one observable, not several that we can choose as we please, or...?
 
  • #4
atyy
Science Advisor
You can (in principle) measure commuting observables simultaneously.
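(For instance, ##\mathbf M_A\otimes\mathbf 1_B## commutes with every operator of the form ##\mathbf 1_A\otimes\mathbf O_B##. A toy numpy check, with randomly chosen Hermitian matrices:)

[code]
import numpy as np

rng = np.random.default_rng(2)
dA, dB = 2, 3

# Random Hermitian observables on the two tensor factors
M_A = rng.normal(size=(dA, dA)); M_A = (M_A + M_A.T) / 2
O_B = rng.normal(size=(dB, dB)); O_B = (O_B + O_B.T) / 2

# Operators acting on different tensor factors always commute
P = np.kron(M_A, np.eye(dB))
Q = np.kron(np.eye(dA), O_B)
print(np.allclose(P @ Q, Q @ P))  # True
[/code]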
 
  • #5
strangerep
Science Advisor
I have been reading Wikipedia, Hugh Everett's thesis, and this paper.
See my (2nd) signature line... :wink:
 
  • #6
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
I think it's worth going through the mathematics as to why density matrices show up whenever you have entanglement between two different systems.

Suppose that we have a two-component system described by a composite wave function [itex]|\Psi\rangle[/itex], which can be written as a superposition of basis elements: [itex]|\Psi\rangle = \sum_{\alpha, j} c_{\alpha j} |\alpha, j\rangle[/itex], where [itex]\alpha[/itex] refers to the state of the first subsystem, and [itex]j[/itex] refers to the second subsystem. The expectation value of an observable [itex]O[/itex] is given by: [itex]\langle O \rangle = \langle \Psi|O|\Psi\rangle = \sum_{\alpha, \beta, j, k} c^*_{\alpha j} c_{\beta k} \langle \alpha, j |O|\beta, k\rangle[/itex]. But now, let's suppose that the observable [itex]O[/itex] only involves the second subsystem. That means that it has the form [itex]\mathbf{1}_A \otimes O_B[/itex], which means that its matrix elements have the form: [itex]\langle \alpha, j|O|\beta, k \rangle = \delta_{\alpha,\beta} O_{jk}[/itex]. So

[itex]\langle O \rangle = \sum_{\alpha, \beta, j, k} c^*_{\alpha j} c_{\beta k} \delta_{\alpha, \beta} O_{jk} = \sum_{\alpha, j, k} c^*_{\alpha j} c_{\alpha k} O_{jk}[/itex]

Now, note that if we define [itex]\rho^B_{kj} \equiv \sum_{\alpha} c^*_{\alpha j} c_{\alpha k}[/itex], then this can be written:

[itex]\langle O \rangle = \sum_k \left(\sum_j \rho^B_{kj} O_{jk}\right) = \sum_k (\rho^B O)_{kk} \equiv \mathrm{tr}(\rho^B O)[/itex]

So the reduced density matrix [itex]\rho^B[/itex] is exactly the mathematical object needed to be able to compute expectation values for operators that only involve the second subsystem.
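(Here is a numerical sanity check of that identity; the dimensions, state, and observable are random toy choices:)

[code]
import numpy as np

rng = np.random.default_rng(3)
dA, dB = 2, 3

# Coefficients c[alpha, j] of |Psi> = sum c_{alpha j} |alpha, j>, normalized
c = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
c /= np.linalg.norm(c)

# A Hermitian observable O acting on subsystem B alone
O = rng.normal(size=(dB, dB)) + 1j * rng.normal(size=(dB, dB))
O = (O + O.conj().T) / 2

# Reduced density matrix: rho^B_{kj} = sum_alpha c*_{alpha j} c_{alpha k}
rho_B = c.T @ c.conj()

# <Psi| 1_A tensor O |Psi>, computed directly on the composite space
Psi = c.reshape(-1)  # this flattening matches the np.kron index ordering
lhs = np.real(Psi.conj() @ np.kron(np.eye(dA), O) @ Psi)
rhs = np.real(np.trace(rho_B @ O))
print(np.isclose(lhs, rhs))  # True: <O> = tr(rho^B O)
[/code]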
 
  • #7
Erland
Science Advisor
So the reduced density matrix [itex]\rho^B[/itex] is exactly the mathematical object needed to be able to compute expectation values for operators that only involve the second subsystem.
In what sense is it "exactly the mathematical object needed"? Why can't we alternatively use the method in my OP?

strangerep, I found Ballentine on the net, I'll look it up.
 
  • #8
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
In what sense is it "exactly the mathematical object needed"? Why can't we alternatively use the method in my OP?
You can. It amounts to the same thing, except that your approach requires carrying around the baggage of system A, even though it is irrelevant for measurements of system B. The reduced density matrix for system B eliminates all reference to system A (since it's irrelevant for measurements on B).
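(One way to see this concretely: two quite different composite states can have the same reduced density matrix for B, and then no B-only measurement can distinguish them. A toy two-qubit example:)

[code]
import numpy as np

# (|00> + |11>)/sqrt(2) and (|00> - |11>)/sqrt(2): different composite
# states, but the same reduced density matrix for subsystem B
Psi1 = np.array([1, 0, 0, 1]) / np.sqrt(2)
Psi2 = np.array([1, 0, 0, -1]) / np.sqrt(2)

def rho_B(Psi, dA=2, dB=2):
    c = Psi.reshape(dA, dB)   # coefficients c[alpha, j]
    return c.T @ c.conj()     # same convention as in post #6

print(np.allclose(rho_B(Psi1), rho_B(Psi2)))  # True: both equal I/2
[/code]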
 
  • #9
Erland
Science Advisor
You can. It amounts to the same thing, except that your approach requires carrying around the baggage of system A, even though it is irrelevant for measurements of system B. The reduced density matrix for system B eliminates all reference to system A (since it's irrelevant for measurements on B).
But we still need the coordinates of the other system in the calculation of the elements of the density matrix.
 
  • #10
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
But we still need the coordinates of the other system in the calculation of the elements of the density matrix.
If you know the state of the composite system, then you can use it to compute the density matrix for the subsystem of interest. The point is that, once you have done that, the other system never comes into play again (for measurements involving just the subsystem).

In practice, very often you don't know the state of the composite system. But the density matrix for the subsystem can be estimated statistically based on measurements (under the assumption that the situation is reproducible).
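(A sketch of that last point: for a single qubit, ##\rho=\tfrac12(\mathbf 1+\langle\sigma_x\rangle\sigma_x+\langle\sigma_y\rangle\sigma_y+\langle\sigma_z\rangle\sigma_z)##, so estimating three expectation values from repeated measurements reconstructs ##\rho##. Here the "true" state and the sampling are simulated, just for illustration:)

[code]
import numpy as np

rng = np.random.default_rng(4)

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The "unknown" state the experimenter is trying to estimate (a mixed qubit)
rho_true = 0.8 * np.array([[1, 0], [0, 0]], dtype=complex) + 0.2 * I2 / 2

def estimate(sigma, shots=100_000):
    # Simulate repeated +1/-1 measurements of a Pauli observable
    p_plus = np.real(np.trace(rho_true @ (I2 + sigma) / 2))
    outcomes = rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus])
    return outcomes.mean()

# Reconstruct rho = (I + <sx> sx + <sy> sy + <sz> sz) / 2
rho_est = (I2 + estimate(sx) * sx + estimate(sy) * sy + estimate(sz) * sz) / 2
print(np.round(rho_est, 2))  # close to rho_true
[/code]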
 
  • #11
Erland
Science Advisor
Ok, thanks. I'm reading Ballentine...
 
