Function of operators. Series, matrices.


Discussion Overview

The discussion revolves around the representation of the natural logarithm function as a series, particularly in the context of matrices and operators. Participants explore the validity of expressing ##\ln x## and ##\ln A## as power series, the conditions under which these representations hold, and the implications for calculating traces involving logarithmic operators.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants question whether the series representation ##\ln x = \sum_n a_n x^n## is valid, noting that ##\ln 0## is undefined.
  • Others assert that for the logarithm, series representations are valid for functions like ##\ln(1+x)## or ##\ln(1-x)##.
  • It is proposed that the series can be expressed as ##\ln x = \sum_n a_n (x-1)^n## and similarly for matrices, which some participants confirm as correct.
  • Participants discuss the conditions under which the series representation for operators is valid, specifically mentioning a radius of convergence related to the operator norm.
  • Concerns are raised about calculating ##Tr(A \ln A)## and whether the series representation simplifies this calculation, with some suggesting that it is only valid under certain conditions.
  • Questions arise regarding the use of spectral decomposition for density matrices and the implications of degeneracy in eigenvalues.
  • Some participants clarify that eigenvectors cannot be zero vectors and discuss the implications for density matrices and eigenvalue uniqueness.
  • There is a debate over the definition of eigenvectors, with some arguing for the inclusion of zero vectors in certain contexts, while others maintain that eigenvectors must be non-zero.

Areas of Agreement / Disagreement

Participants express differing views on the validity of series representations for logarithmic functions and operators, particularly regarding conditions for convergence and the implications for matrix calculations. The discussion remains unresolved on several points, particularly concerning the definition and treatment of eigenvectors.

Contextual Notes

Limitations include the dependence on definitions of logarithmic functions for matrices, the conditions under which series representations are valid, and the implications of degeneracy in eigenvalues for spectral decomposition.

LagrangeEuler
Is it possible to write the series ##\ln x=\sum_n a_n x^n##? I am asking because this is the Taylor series around zero, and ##\ln 0## is not defined.

And if ##A## is a matrix, is it possible to write
##\ln A=\sum_n a_n A^n##? Thanks for the answer!
 
The answer to the first question is no. This forces the second question to also have an answer no.

For ##\ln##, if you want a series, you can use ##\ln(1+x)##, ##\ln(1-x)##, or ##\ln\frac{1+x}{1-x}##.
 
Ok. Tnx. But it is ok to write
##\ln x=\sum_n a_n(x-1)^n##
right?
or
##\ln A=\sum_n a_n(A-1)^n##
 
Yes, of course. In fact it is well known that (Taylor series)
[tex]\ln(x)= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}(x- 1)^n[/tex]
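As a quick numerical check (my own addition, not from the thread), the partial sums of this series can be compared against ##\ln x## at a few points inside the interval of convergence ##0 < x \le 2##:

```python
import math

def ln_series(x, terms=10000):
    """Partial sum of ln x = sum_{n>=1} (-1)^(n+1)/n * (x-1)^n, valid for 0 < x <= 2."""
    return sum((-1) ** (n + 1) / n * (x - 1) ** n for n in range(1, terms + 1))

# Compare the truncated series with math.log at a few sample points.
for x in (0.5, 1.0, 1.5):
    print(x, ln_series(x), math.log(x))
```

The agreement is excellent well inside the interval; near the endpoint ##x=2## the alternating series still converges, but slowly, while for ##x>2## it diverges.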
 
Ok. Tnx. Is it true also for operators?
 
LagrangeEuler said:
Ok. Tnx. Is it true also for operators?

Yes, but not for all operators. Only those within the radius of convergence of the series. Here, since the series is in powers of ##A-1##, it converges for all operators with ##\|A-1\|<1##.

For a proof, see "The Structure and Geometry of Lie Groups" by Hilgert and Neeb, chapter 3, although most functional analysis texts should cover this as well.
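A sketch of the matrix version (my own illustration with a made-up ##2\times 2## example, not from the thread): truncate the series ##\ln A=\sum_{n\ge 1}\frac{(-1)^{n+1}}{n}(A-I)^n## and verify the result by exponentiating it back with a truncated exponential series, so the check is self-contained.

```python
import numpy as np

def logm_series(A, terms=200):
    """Truncated ln A = sum_{n>=1} (-1)^(n+1)/n * (A - I)^n; needs ||A - I|| < 1."""
    n_dim = A.shape[0]
    B = A - np.eye(n_dim)
    term = np.eye(n_dim)            # will hold B^n
    out = np.zeros_like(B)
    for n in range(1, terms + 1):
        term = term @ B
        out += (-1) ** (n + 1) / n * term
    return out

def expm_series(X, terms=30):
    """Truncated exp X = sum_{n>=0} X^n / n!, used only to verify the logarithm."""
    out = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for n in range(1, terms + 1):
        term = term @ X / n
        out += term
    return out

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # A - I has small norm, so the series converges
L = logm_series(A)
print(np.allclose(expm_series(L), A))   # exp(ln A) recovers A
```

The same loop works in any dimension; only the convergence condition on ##A - I## matters.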
 
Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. Because of that I thought to write ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?
 
LagrangeEuler said:
Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. Because of that I thought to write ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?

If ##\|A-1\|<1##, then you can do it this way. Otherwise, you will need to tell us what your definition of ##\ln A## is.
 
LagrangeEuler said:
Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. Because of that I thought to write ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?

Is A by any chance a density matrix (ie are you calculating the von Neumann entropy)? If so, its properties guarantee that you can use its spectral decomposition to write the trace as [itex]\mathrm{Tr} \rho \ln \rho = \sum_i \lambda_i \ln{\lambda_i}[/itex] for the eigenvalues [itex]\lambda_i[/itex] of the density matrix, which is often a useful form.
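A minimal sketch of this identity (my own example, with an arbitrary ##2\times 2## mixed-state density matrix): build ##\ln\rho## from the spectral decomposition and compare ##-\mathrm{Tr}(\rho\ln\rho)## with ##-\sum_i\lambda_i\ln\lambda_i##.

```python
import numpy as np

# Hypothetical mixed-state density matrix: Hermitian, trace 1, positive eigenvalues.
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]])

lam, U = np.linalg.eigh(rho)               # spectral decomposition of a Hermitian matrix
ln_rho = U @ np.diag(np.log(lam)) @ U.T    # ln(rho) built from its eigenvalues
entropy_trace = -np.trace(rho @ ln_rho)    # -Tr(rho ln rho), an actual matrix trace
entropy_eigs = -np.sum(lam * np.log(lam))  # -sum_i lambda_i ln(lambda_i)
print(entropy_trace, entropy_eigs)         # the two agree
```

The eigenvalue form avoids ever constructing ##\ln\rho## explicitly, which is why it is the standard route to the von Neumann entropy.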
 
  • #10
DeIdeal said:
Is A by any chance a density matrix (ie are you calculating the von Neumann entropy)? If so, its properties guarantee that you can use its spectral decomposition to write the trace as [itex]\mathrm{Tr} \rho \ln \rho = \sum_i \lambda_i \ln{\lambda_i}[/itex] for the eigenvalues [itex]\lambda_i[/itex] of the density matrix, which is often a useful form.

When can I use the spectral decomposition? Is it true only for Hermitian matrices, or not? Also, what happens in the case of a degenerate spectrum?
 
  • #11
LagrangeEuler said:
When can I use the spectral decomposition? Is it true only for Hermitian matrices, or not? Also, what happens in the case of a degenerate spectrum?

It exists for all normal operators, so Hermitian matrices are fine. For a degenerate spectrum, you can use the general spectral decomposition: [itex]A=\sum_{i=1}^{n}\sum_{d=1}^{D(i)} \lambda_i |\lambda_i,d\rangle\langle\lambda_i,d|[/itex], where D(i) is the degeneracy of the eigenvalue [itex]\lambda_i[/itex], so that the vectors [itex]|\lambda_i,d\rangle[/itex] form an orthonormal basis of the corresponding D(i)-dimensional eigenspace.

Note that other properties of the density matrix guarantee that the logarithm is well-defined: the operator norm of a bounded normal operator equals its spectral radius, and if this is indeed a density operator, its eigenvalues are non-negative and [itex]\sum_i \lambda_i = 1[/itex]. So the convergence condition [itex]\|A-1\|<1[/itex] is fulfilled as long as every eigenvalue is strictly positive. It fails when some eigenvalue of A vanishes, in particular for a pure state, but there the convention [itex]0 \ln 0 = 0[/itex] keeps the entropy well-defined; for a pure state the von Neumann entropy vanishes.
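For the degenerate case, a small sketch (my own made-up example, not from the thread): a Hermitian matrix with a doubly degenerate eigenvalue is reconstructed from ##\sum_i \lambda_i |\lambda_i\rangle\langle\lambda_i|##; the particular choice of orthonormal basis within the degenerate eigenspace does not matter.

```python
import numpy as np

# Rotate diag(2, 2, 5) so the eigenvectors are not just the coordinate axes;
# the eigenvalue 2 is doubly degenerate.
c, s = np.cos(0.3), np.sin(0.3)
Q = np.array([[1.0, 0.0, 0.0],
              [0.0,   c,  -s],
              [0.0,   s,   c]])
A = Q @ np.diag([2.0, 2.0, 5.0]) @ Q.T

w, V = np.linalg.eigh(A)      # columns of V: one orthonormal choice of eigenvectors
# Reconstruct A = sum_i lambda_i |v_i><v_i|; degeneracy is no obstacle, since
# eigh simply picks some orthonormal basis of the 2-dimensional eigenspace.
A_rec = sum(w[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
print(np.allclose(A_rec, A))
```

Any other orthonormal basis of the degenerate eigenspace gives the same projector, hence the same reconstructed matrix; this is why the non-uniqueness of degenerate eigenvectors is harmless.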
 
  • #12
DeIdeal said:
It exists for all normal operators, so Hermitian matrices are fine. For a degenerate spectrum, you can use the general spectral decomposition: [itex]A=\sum_{i=1}^{n}\sum_{d=1}^{D(i)} \lambda_i |\lambda_i,d\rangle\langle\lambda_i,d|[/itex], where D(i) is the degeneracy of the eigenvalue [itex]\lambda_i[/itex], so that the vectors [itex]|\lambda_i,d\rangle[/itex] form an orthonormal basis of the corresponding D(i)-dimensional eigenspace.
In the case of degeneracy I cannot find the eigenvectors in a unique form, so I think I cannot reconstruct the operator uniquely. For example, take the case in ##\mathbb{R}^3## with ##\lambda_1=\lambda_2=\lambda_3=1## and eigenvectors
##x_1^T=(1 \quad 0 \quad 0)##, ##x_2^T=(0 \quad 0 \quad 0)##, ##x_3^T=(0 \quad 0 \quad 0)##. How do I find the matrix in this case?
 
  • #13
Eigenvectors cannot be zero vectors.

(In addition, if we're still considering density matrices, that's not going to be a valid density matrix since the sum of eigenvalues is greater than 1.)
 
  • #15
DeIdeal said:
Eigenvectors cannot be zero vectors.

(In addition, if we're still considering density matrices, that's not going to be a valid density matrix since the sum of eigenvalues is greater than 1.)
Yes, most texts specify that eigenvectors must be non-zero. But that leaves us having to say "the set of all eigenvectors of a linear transformation, together with the zero vector, forms a subspace" and "the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, forms a subspace."

A few texts say "[itex]\lambda[/itex] is an eigenvalue of linear transformation A if and only if there is a non-zero vector, v, such that [itex]Av= \lambda v[/itex]" and then "v is an eigenvector of A, corresponding to eigenvalue [itex]\lambda[/itex], if and only if [itex]Av= \lambda v[/itex]" which does NOT require that an eigenvector be non-zero. That allows us to say simply "the set of all eigenvectors of a linear transformation form a subspace" without having to add the zero vector. A small point but I see no reason to refuse to allow the zero vector as an eigenvector.
 
  • #16
HallsofIvy said:
Yes, most texts specify that eigenvectors must be non-zero. But that leaves us having to say "the set of all eigenvectors of a linear transformation, together with the zero vector, forms a subspace" and "the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, forms a subspace."

A few texts say "[itex]\lambda[/itex] is an eigenvalue of linear transformation A if and only if there is a non-zero vector, v, such that [itex]Av= \lambda v[/itex]" and then "v is an eigenvector of A, corresponding to eigenvalue [itex]\lambda[/itex], if and only if [itex]Av= \lambda v[/itex]" which does NOT require that an eigenvector be non-zero. That allows us to say simply "the set of all eigenvectors of a linear transformation form a subspace" without having to add the zero vector. A small point but I see no reason to refuse to allow the zero vector as an eigenvector.

Really? Mind giving a reference? I'm not really skeptical, just curious, since I have never seen that convention used in a text (either in linear algebra or in functional analysis), and this is the second time someone has commented on it in a post of mine. I guess the main difference is that you give up the uniqueness of the eigenvalue associated with a given eigenvector, although I wouldn't be surprised if more exceptions arise.
 
  • #17
I have never seen a math text that allows zero as an eigenvector. I would like to see some references for this too.
 
  • #18
In my usage, an eigenvector corresponding to an eigenvalue [itex]\lambda[/itex] is a non-zero [itex]v \in V[/itex] such that [itex]Av = \lambda v[/itex]. The eigenspace corresponding to the eigenvalue [itex]\lambda[/itex] is the set of all [itex]v \in V[/itex] such that [itex]Av = \lambda v[/itex]; the eigenspace is, as the name suggests, a subspace of [itex]V[/itex].
 
