Function of operators. Series, matrices.

LagrangeEuler
Is it possible to write the series ##\ln x=\sum_na_nx^n##? I am asking because this would be a Taylor series around zero, and ##\ln 0## is not defined.

And if ##A## is a matrix, is it possible to write
##\ln A=\sum_na_nA^n##? Thanks for the answer!
 
The answer to the first question is no, which forces the answer to the second question to be no as well.

For ##\ln##, if you want a series, you can use ##\ln(1+x)##, ##\ln(1-x)##, or ##\ln((1+x)/(1-x))##.
 
Ok. Thanks. But it is ok to write
##\ln x=\sum_n a_n(x-1)^n##,
right?
Or
##\ln A=\sum_n a_n(A-I)^n##?
 
Yes, of course. In fact, it is the well-known Taylor series
##\ln x= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}(x-1)^n##,
valid for ##0<x\le 2##.
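As a quick numerical check of that series (the helper name `ln_series` is mine, not from the thread), the partial sums converge to `math.log` for arguments inside the interval of convergence:

```python
import math

# Partial sums of the Taylor series ln(x) = sum_{n>=1} (-1)^(n+1) (x-1)^n / n,
# which converges for 0 < x <= 2.
def ln_series(x, terms=200):
    return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

print(ln_series(1.5), math.log(1.5))  # the two values agree to machine precision
```

With ##|x-1|=0.5##, the truncation error after 200 terms is of order ##0.5^{200}##, far below floating-point resolution.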
 
Ok. Thanks. Is it true also for operators?
 
LagrangeEuler said:
Ok. Thanks. Is it true also for operators?

Yes, but not for all operators, only those within the radius of convergence. Since the series is in powers of ##A-I##, it is ok for all operators with ##\|A-I\|<1##.

For a proof, see "The Structure and Geometry of Lie Groups" by Hilgert and Neeb, Chapter 3, although most functional analysis texts cover this as well.
 
Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. Because of that, I thought to write ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?
 
LagrangeEuler said:
Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. Because of that, I thought to write ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?

If ##\|A-I\|<1##, then you can do it this way. Otherwise, you will need to tell us what your definition of ##\ln A## is.
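A small numerical sketch of this (the helper name `logm_series` and the test matrix are my own choices): for a symmetric matrix with ##\|A-I\|<1##, the partial sums of the series match the matrix logarithm defined through the spectral decomposition.

```python
import numpy as np

# ln(A) = sum_{n>=1} (-1)^(n+1) (A - I)^n / n, convergent when ||A - I|| < 1.
def logm_series(A, terms=200):
    dim = A.shape[0]
    B = A - np.eye(dim)
    result = np.zeros_like(A)
    power = np.eye(dim)
    for n in range(1, terms + 1):
        power = power @ B                      # power = (A - I)^n
        result += (-1) ** (n + 1) * power / n
    return result

# Symmetric test matrix with spectrum close to 1, so ||A - I|| < 1.
A = np.array([[0.9, 0.1],
              [0.1, 0.8]])
vals, vecs = np.linalg.eigh(A)
log_exact = vecs @ np.diag(np.log(vals)) @ vecs.T  # spectral definition of ln(A)

print(np.allclose(logm_series(A), log_exact))  # True
```

Both constructions agree because the eigenvalues of this ##A## lie in ##(0,2)##, where the scalar series for ##\ln x## converges.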
 
LagrangeEuler said:
Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. Because of that, I thought to write ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?

Is ##A## by any chance a density matrix (i.e. are you calculating the von Neumann entropy)? If so, its properties guarantee that you can use its spectral decomposition to write the trace as ##\mathrm{Tr}\,\rho \ln \rho = \sum_i \lambda_i \ln\lambda_i## for the eigenvalues ##\lambda_i## of the density matrix, which is often a useful form.
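This identity is easy to verify numerically (the specific ##2\times 2## density matrix below is my own illustrative choice): the matrix form of ##\mathrm{Tr}\,\rho\ln\rho## and the eigenvalue sum give the same number.

```python
import numpy as np

# A 2x2 density matrix: Hermitian, positive definite, unit trace.
rho = np.array([[0.55, 0.15],
                [0.15, 0.45]])

vals, vecs = np.linalg.eigh(rho)
# Spectral form of the matrix logarithm (valid since all eigenvalues are > 0).
log_rho = vecs @ np.diag(np.log(vals)) @ vecs.T

direct = np.trace(rho @ log_rho)              # Tr(rho ln rho) with matrices
from_spectrum = np.sum(vals * np.log(vals))   # sum_i lambda_i ln lambda_i

print(np.isclose(direct, from_spectrum))  # True
```

The eigenvalue form avoids constructing ##\ln\rho## at all, which is why it is the standard route to the von Neumann entropy ##S=-\sum_i\lambda_i\ln\lambda_i##.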
 
  • #10
DeIdeal said:
Is ##A## by any chance a density matrix (i.e. are you calculating the von Neumann entropy)? If so, its properties guarantee that you can use its spectral decomposition to write the trace as ##\mathrm{Tr}\,\rho \ln \rho = \sum_i \lambda_i \ln\lambda_i## for the eigenvalues ##\lambda_i## of the density matrix, which is often a useful form.

When can I use the spectral decomposition? Is it true only for Hermitian matrices, or not? Also, what happens in the case of a degenerate spectrum?
 
  • #11
LagrangeEuler said:
When can I use the spectral decomposition? Is it true only for Hermitian matrices, or not? Also, what happens in the case of a degenerate spectrum?

It exists for all normal operators, so Hermitian matrices are fine. For a degenerate spectrum, you can use the general spectral decomposition ##A=\sum_{i=1}^{n}\sum_{d=1}^{D(i)} \lambda_i |\lambda_i,d\rangle\langle\lambda_i,d|##, where ##D(i)## is the degeneracy of eigenvalue ##\lambda_i##, so that the vectors ##|\lambda_i,d\rangle## form an orthonormal basis of the corresponding ##D(i)##-dimensional eigenspace.

Note that the other properties of the density matrix guarantee that the logarithm is well defined: the operator norm of a bounded normal operator equals its spectral radius, and for a density operator the eigenvalues are non-negative with ##\sum_i \lambda_i = 1##. So the condition micromass mentioned, ##\|A-I\|<1##, is fulfilled whenever all eigenvalues are strictly positive. It fails only when some eigenvalue is zero; those terms are handled by the convention ##0\ln 0=0##, and in the extreme case of a pure state (one eigenvalue equal to one) the von Neumann entropy vanishes entirely.
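The degenerate case can be demonstrated concretely (the rotation angle and spectrum below are my own illustrative choices): even though eigenvectors within a degenerate eigenspace are not unique, the spectral decomposition still reconstructs the matrix exactly, because the projector onto each eigenspace is unique.

```python
import numpy as np

# Symmetric 3x3 matrix with a doubly degenerate eigenvalue (spectrum {2, 2, 5}),
# rotated so the degeneracy is not obvious from the entries.
theta = 0.7
R = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
              [0.0,           1.0,  0.0],
              [np.sin(theta), 0.0,  np.cos(theta)]])
A = R @ np.diag([2.0, 2.0, 5.0]) @ R.T

vals, vecs = np.linalg.eigh(A)  # eigh returns SOME orthonormal eigenbasis
# Reconstruct A = sum_i lambda_i |v_i><v_i|.  Whichever basis eigh picked
# inside the degenerate eigenspace, the sum of projectors is the same.
A_rebuilt = sum(vals[i] * np.outer(vecs[:, i], vecs[:, i]) for i in range(3))

print(np.allclose(A, A_rebuilt))  # True
```

This is why degeneracy is harmless for computing ##\mathrm{Tr}(A\ln A)##: only the eigenvalues and the eigenspace projectors enter, not any particular choice of eigenvectors.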
 
  • #12
DeIdeal said:
It exists for all normal operators, so Hermitian matrices are fine. For a degenerate spectrum, you can use the general spectral decomposition ##A=\sum_{i=1}^{n}\sum_{d=1}^{D(i)} \lambda_i |\lambda_i,d\rangle\langle\lambda_i,d|##, where ##D(i)## is the degeneracy of eigenvalue ##\lambda_i##, so that the vectors ##|\lambda_i,d\rangle## form an orthonormal basis of the corresponding ##D(i)##-dimensional eigenspace.
In the case of degeneracy I cannot find the eigenvectors in a unique way, so I think I cannot reconstruct the operator uniquely. For example, take the case in ##\mathbb{R}^3## with ##\lambda_1=\lambda_2=\lambda_3=1## and eigenvectors
##x_1^T=(1 \quad 0 \quad 0)##, ##x_2^T=(0 \quad 0 \quad 0)##, ##x_3^T=(0 \quad 0 \quad 0)##. How do I find the matrix in this case?
 
  • #13
Eigenvectors cannot be zero vectors.

(In addition, if we're still considering density matrices, that's not going to be a valid density matrix since the sum of eigenvalues is greater than 1.)
 
  • #15
DeIdeal said:
Eigenvectors cannot be zero vectors.

(In addition, if we're still considering density matrices, that's not going to be a valid density matrix since the sum of eigenvalues is greater than 1.)
Yes, most texts specify that eigenvectors must be non-zero. But that leaves us having to say "the set of all eigenvectors of a linear transformation, together with the zero vector, forms a subspace" and "the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, forms a subspace."

A few texts say "##\lambda## is an eigenvalue of a linear transformation ##A## if and only if there is a non-zero vector ##v## such that ##Av= \lambda v##" and then "##v## is an eigenvector of ##A##, corresponding to eigenvalue ##\lambda##, if and only if ##Av= \lambda v##", which does NOT require that an eigenvector be non-zero. That allows us to say simply "the set of all eigenvectors corresponding to a given eigenvalue forms a subspace" without having to add the zero vector. A small point, but I see no reason to refuse to allow the zero vector as an eigenvector.
 
  • #16
HallsofIvy said:
Yes, most texts specify that eigenvectors must be non-zero. But that leaves us having to say "the set of all eigenvectors of a linear transformation, together with the zero vector, forms a subspace" and "the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, forms a subspace."

A few texts say "##\lambda## is an eigenvalue of a linear transformation ##A## if and only if there is a non-zero vector ##v## such that ##Av= \lambda v##" and then "##v## is an eigenvector of ##A##, corresponding to eigenvalue ##\lambda##, if and only if ##Av= \lambda v##", which does NOT require that an eigenvector be non-zero. That allows us to say simply "the set of all eigenvectors corresponding to a given eigenvalue forms a subspace" without having to add the zero vector. A small point, but I see no reason to refuse to allow the zero vector as an eigenvector.

Really? Mind giving a reference? I'm not really skeptical, just curious: I have never seen that convention used in a text (either in linear algebra or in functional analysis), and this is the second time someone has commented about it on a post of mine. I guess the main difference is that you give up the uniqueness of the eigenvalue attached to a given eigenvector, since the zero vector satisfies ##Av=\lambda v## for every ##\lambda##, although I wouldn't be surprised if more exceptions arise.
 
  • #17
I have never seen a math text that allows zero as an eigenvector. I would like to see some references for this too.
 
  • #18
In my usage, an eigenvector corresponding to an eigenvalue ##\lambda## is a non-zero ##v \in V## such that ##Av = \lambda v##. The eigenspace corresponding to the eigenvalue ##\lambda## is the set of all ##v \in V## such that ##Av = \lambda v##; the eigenspace is, as the name suggests, a subspace of ##V##.
 