On the definition of Von Neumann entropy

SUMMARY

The Von Neumann entropy is defined as S(ρ) = -tr(ρ log ρ), where ρ is the density matrix. The logarithm of a matrix can be defined by the series expansion log A = ∑_{k=1}^∞ (-1)^{k+1} (A - I)^k / k, valid for matrices for which the series converges. For non-diagonal matrices, diagonalization simplifies the calculation, allowing the entropy to be written as S(ρ) = -tr(D M), where D is the diagonalized density matrix and M is the diagonal matrix of the logarithms of the eigenvalues of ρ. This is confirmed as the standard way of calculating the Von Neumann entropy.

PREREQUISITES
  • Understanding of density matrices in quantum mechanics
  • Familiarity with matrix logarithms and series expansions
  • Knowledge of eigenvalues and eigenvectors
  • Basic concepts of linear algebra
NEXT STEPS
  • Study the properties of density matrices in quantum mechanics
  • Learn about matrix logarithms and their applications
  • Explore diagonalization techniques for matrices
  • Investigate the implications of Von Neumann entropy in quantum information theory
USEFUL FOR

Quantum physicists, mathematicians, and anyone involved in quantum information theory or studying the properties of quantum systems will benefit from this discussion.

univector
I am confused by the definition of the Von Neumann entropy. In Nielsen and Chuang's book, page 510, the Von Neumann entropy is defined as
S(\rho) = - \mathrm{tr}(\rho \log \rho)
where \rho is the density matrix. What is the definition of the logarithm of a matrix? Is it some series expansion of the matrix, or an element-by-element logarithm?

Thanks.
 
Note that

\frac{d}{dx}\log(1+x)=\frac{1}{1+x}=1-x+x^2-x^3+\cdots

Integrate.

\log(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots

Now set y=1+x.

\log y=(y-1)-\frac{(y-1)^2}{2}+\cdots=\sum_{k=1}^\infty(-1)^{k+1}\frac{(y-1)^k}{k}

This suggests that if A is a matrix, we can define log A by

\log A=\sum_{k=1}^\infty(-1)^{k+1}\frac{(A-I)^k}{k}

for all matrices A such that the series converges — roughly, whenever the eigenvalues of A - I all lie inside the unit circle.
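For a quick numerical sanity check of this series, here is a small numpy sketch (the helper name log_matrix_series, the 200-term truncation, and the example matrix are illustrative choices, not anything from the thread). When the series converges it agrees with the eigendecomposition formula \log A = V \,\mathrm{diag}(\log \lambda_i)\, V^{-1}:

```python
import numpy as np

def log_matrix_series(A, terms=200):
    """Matrix log via the series  log A = sum_{k>=1} (-1)^(k+1) (A - I)^k / k.
    Only meaningful when the series converges (roughly: eigenvalues of A - I
    inside the unit circle)."""
    n = A.shape[0]
    X = A - np.eye(n)
    power = np.eye(n)                     # running power of (A - I)
    result = np.zeros((n, n))
    for k in range(1, terms + 1):
        power = power @ X                 # now power = (A - I)^k
        result = result + ((-1) ** (k + 1)) / k * power
    return result

# A matrix close to the identity, so the series converges quickly
A = np.array([[1.2, 0.1],
              [0.1, 0.9]])
print(log_matrix_series(A))

# Cross-check against the eigendecomposition definition of the matrix log
lam, V = np.linalg.eig(A)
print(V @ np.diag(np.log(lam)) @ np.linalg.inv(V))
```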
 
Hi Fredrik,

Thanks for the explanation. If the matrix \rho is not diagonal, it is not trivial to evaluate the matrix power series directly. Instead, if we first diagonalize \rho by a similarity transformation, things become easier. Let the diagonal matrix be D. Then, by using the series expansion forward (for the matrix) and backward (for each eigenvalue), we arrive at
S(\rho) = - \mathrm{tr}(D M),
where
M = \left( \begin{array}{cccc} \log \lambda_1 & 0 & \cdots & 0 \\ 0 & \log \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \log \lambda_n \end{array} \right)
and \lambda_i is the ith eigenvalue of \rho (or D), and n is the number of rows (and columns) of \rho.

Is this the way people calculate S(\rho)?
 
Yes.
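For concreteness, here is a minimal numpy sketch of that recipe (the helper name von_neumann_entropy and the 1e-12 eigenvalue cutoff are illustrative choices). It uses the natural logarithm; replace np.log with np.log2 to get the entropy in bits, which is the convention in Nielsen and Chuang:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log rho), computed from the eigenvalues of rho:
    S = -sum_i lambda_i log(lambda_i), with the convention 0 log 0 = 0."""
    lam = np.linalg.eigvalsh(rho)     # rho is Hermitian, so eigvalsh is appropriate
    lam = lam[lam > 1e-12]            # drop numerically zero eigenvalues (0 log 0 = 0)
    return float(-np.sum(lam * np.log(lam)))

# Maximally mixed qubit state: S = log 2
print(von_neumann_entropy(np.eye(2) / 2))            # ~0.693

# Pure state: zero entropy
print(von_neumann_entropy(np.array([[1.0, 0.0],
                                    [0.0, 0.0]])))   # 0
```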
 
