
Function of operators. Series, matrices.

  1. May 26, 2014 #1
    Is it possible to write the series ##\ln x=\sum_n a_n x^n##? I am asking because this would be a Taylor series around zero, and ##\ln 0## is not defined.

    And if ##A## is a matrix, is it possible to write
    ##\ln A=\sum_n a_n A^n##? Thanks for the answer!
  3. May 26, 2014 #2



    The answer to the first question is no, which forces the answer to the second question to be no as well.

    If you want a series for the logarithm, you can expand ##\ln(1+x)##, ##\ln(1-x)##, or ##\ln((1+x)/(1-x))## instead.
  4. May 26, 2014 #3
    OK, thanks. But is it OK to write
    ##\ln x=\sum_n a_n(x-1)^n##
    ##\ln A=\sum_n a_n(A-I)^n##, with ##I## the identity?
  5. May 26, 2014 #4



    Yes, of course. In fact, it is well known that (Taylor's series)
    [tex]\ln(x)= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}(x- 1)^n[/tex]
    which converges for ##0 < x \le 2##.
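    As a quick sanity check, here is a minimal Python sketch (the function name `ln_series` is mine) comparing partial sums of this series against `math.log`:

    ```python
    import math

    def ln_series(x, terms=200):
        """Partial sum of ln(x) = sum_{n>=1} (-1)^(n+1) (x-1)^n / n, valid for 0 < x <= 2."""
        return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

    for x in (0.5, 1.0, 1.5):
        print(x, ln_series(x), math.log(x))
    ```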
  6. May 27, 2014 #5
    OK, thanks. Is it also true for operators?
  7. May 27, 2014 #6
    Yes, but not for all operators, only those within the radius of convergence. Here the series is in powers of ##A - I##, so it is OK for all operators with ##\|A - I\|<1##.

    For a proof, see "The Structure and Geometry of Lie Groups" by Hilgert and Neeb, chapter 3, although most functional analysis texts should have this as well.
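    For intuition, here is a minimal NumPy sketch (names are mine, not from the book) that sums the series for a matrix close to the identity and checks the result by exponentiating back:

    ```python
    import numpy as np

    def logm_series(A, terms=100):
        """ln(A) = sum_{n>=1} (-1)^(n+1) (A-I)^n / n; converges when ||A - I|| < 1."""
        I = np.eye(A.shape[0])
        X = A - I
        result = np.zeros_like(A, dtype=float)
        term = I.copy()
        for n in range(1, terms + 1):
            term = term @ X                    # term = (A - I)^n
            result += (-1) ** (n + 1) * term / n
        return result

    # A symmetric matrix close to the identity, so the series converges.
    A = np.array([[1.2, 0.1], [0.1, 0.9]])
    L = logm_series(A)

    # Cross-check: exp(ln A) should recover A (exponentiate via the
    # eigendecomposition of the symmetric result L).
    w, V = np.linalg.eigh(L)
    A_back = V @ np.diag(np.exp(w)) @ V.T
    print(np.allclose(A_back, A))
    ```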
  8. May 28, 2014 #7
    Thanks a lot. My problem is to calculate ##Tr(A\ln A)##, where ##A## is a matrix. That is why I thought of writing ##\ln A## as a power series, if that is possible. But here I cannot do it in such an easy way, right?
  9. May 28, 2014 #8
    If ##\|A - I\|<1##, then you can do it this way. Otherwise, you will need to tell us what your definition of ##\ln A## is.
  10. May 28, 2014 #9
    Is A by any chance a density matrix (i.e., are you calculating the von Neumann entropy)? If so, its properties guarantee that you can use its spectral decomposition to write the trace as [itex]\mathrm{Tr} \rho \ln \rho = \sum_i \lambda_i \ln{\lambda_i}[/itex] for the eigenvalues [itex]\lambda_i[/itex] of the density matrix, which is often a useful form.
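    A small NumPy sketch of this (the helper name `von_neumann_entropy` is mine), using the convention ##0\ln 0 = 0## for zero eigenvalues:

    ```python
    import numpy as np

    def von_neumann_entropy(rho):
        """S = -Tr(rho ln rho), computed from the eigenvalues with 0 ln 0 := 0."""
        lam = np.linalg.eigvalsh(rho)      # rho is Hermitian, so use eigvalsh
        lam = lam[lam > 1e-12]             # drop (numerically) zero eigenvalues
        return -np.sum(lam * np.log(lam))

    rho_mixed = np.eye(2) / 2              # maximally mixed qubit: S = ln 2
    rho_pure = np.diag([1.0, 0.0])         # pure state |0><0|:     S = 0
    print(von_neumann_entropy(rho_mixed), von_neumann_entropy(rho_pure))
    ```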
  11. May 28, 2014 #10
    When can I use the spectral decomposition? Does it hold only for Hermitian matrices, or not? Also, what happens in the case of a degenerate spectrum?
  12. May 28, 2014 #11
    It exists for all normal operators, so any Hermitian matrix is fine. For a degenerate spectrum, you can use the general spectral decomposition: [itex]A=\sum_{i=1}^{n}\sum_{d=1}^{D(i)} \lambda_i |\lambda_i,d\rangle\langle\lambda_i,d|[/itex], where D(i) is the degeneracy of the eigenvalue [itex]\lambda_i[/itex], so that the vectors [itex]{|\lambda_i,d\rangle}[/itex] form an orthonormal basis of the corresponding D(i)-dimensional eigenspace.

    Note that the other properties of a density matrix take care of the convergence condition: the operator norm of a bounded normal operator equals its spectral radius, and a density operator has non-negative eigenvalues with [itex]\sum_i \lambda_i = 1[/itex]. So the condition micromass mentioned is fulfilled whenever all eigenvalues lie strictly between 0 and 1. The edge cases are harmless for the trace: zero eigenvalues are handled by the convention [itex]0 \ln 0 = 0[/itex], and if the density matrix describes a pure state, that is, one of its eigenvalues equals one, the von Neumann entropy simply vanishes.
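    To illustrate with NumPy (the example matrix and names are mine): rebuild a Hermitian matrix with a degenerate eigenvalue from its spectral decomposition, and evaluate ##\mathrm{Tr}(A\ln A)## from the eigenvalues alone:

    ```python
    import numpy as np

    # A Hermitian matrix with a doubly degenerate eigenvalue 0.25 and a simple 0.5.
    lam = np.array([0.25, 0.25, 0.5])
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthonormal eigenbasis
    A = Q @ np.diag(lam) @ Q.T

    # Rebuild A from its spectral decomposition A = sum_i lam_i |v_i><v_i|.
    w, V = np.linalg.eigh(A)
    A_rebuilt = sum(w[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
    print(np.allclose(A_rebuilt, A))

    # f(A) for a function f acting on the eigenvalues, here f = ln:
    lnA = sum(np.log(w[i]) * np.outer(V[:, i], V[:, i]) for i in range(3))
    print(np.allclose(np.trace(A @ lnA), np.sum(w * np.log(w))))
    ```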
  13. May 28, 2014 #12
    In the case of degeneracy I cannot find the eigenvectors in a unique way, so I think I cannot find the operator in a unique form. For example, consider the case in ##\mathbb{R}^3## with ##\lambda_1=\lambda_2=\lambda_3=1## and eigenvectors
    ##x_1^T=(1 \quad 0 \quad 0)##, ##x_2^T=(0 \quad 0 \quad 0)##, ##x_3^T=(0 \quad 0 \quad 0)##. How do I find the matrix in this case?
  14. May 28, 2014 #13
    Eigenvectors cannot be zero vectors.

    (In addition, if we're still considering density matrices, that's not going to be a valid density matrix since the sum of eigenvalues is greater than 1.)
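    A quick NumPy illustration of why the non-uniqueness of degenerate eigenvectors is harmless (the example is mine): any orthonormal eigenbasis for a three-fold eigenvalue 1 reconstructs the same operator, namely the identity:

    ```python
    import numpy as np

    # Eigenvalue 1 with three-fold degeneracy: any orthonormal basis of R^3 serves
    # as the set of eigenvectors, but the reconstructed operator is the same.
    rng = np.random.default_rng(1)
    basis1 = np.eye(3)                                     # standard basis
    basis2, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # another orthonormal basis

    A1 = sum(1.0 * np.outer(basis1[:, i], basis1[:, i]) for i in range(3))
    A2 = sum(1.0 * np.outer(basis2[:, i], basis2[:, i]) for i in range(3))
    print(np.allclose(A1, A2))   # both equal the identity
    ```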
  16. Jun 3, 2014 #15



    Yes, most texts specify that eigenvectors must be non-zero. But that leaves us having to say "the set of all eigenvectors of a linear transformation, together with the zero vector, forms a subspace" and "the set of all eigenvectors corresponding to a given eigenvalue, together with the zero vector, forms a subspace."

    A few texts say "[itex]\lambda[/itex] is an eigenvalue of a linear transformation A if and only if there is a non-zero vector, v, such that [itex]Av= \lambda v[/itex]" and then "v is an eigenvector of A, corresponding to eigenvalue [itex]\lambda[/itex], if and only if [itex]Av= \lambda v[/itex]", which does NOT require that an eigenvector be non-zero. That allows us to say simply "the set of all eigenvectors corresponding to a given eigenvalue forms a subspace" without having to add the zero vector. A small point, but I see no reason to refuse to allow the zero vector as an eigenvector.
  17. Jun 5, 2014 #16
    Really? Would you mind giving a reference? I'm not really skeptical, just curious, since I have never seen that convention used in a text (either in linear algebra or in functional analysis), and this is the second time someone has commented about it on a post I made. I guess the main difference is that you give up the uniqueness of the eigenvalue of a given eigenvector, since the zero vector satisfies [itex]Av = \lambda v[/itex] for every [itex]\lambda[/itex], although I wouldn't be surprised if more exceptions arise.
    Last edited: Jun 5, 2014
  18. Jun 5, 2014 #17
    I have never seen a math text that allows zero as an eigenvector. I would like to see some references for this too.
  19. Jun 5, 2014 #18



    In my usage, an eigenvector corresponding to an eigenvalue [itex]\lambda[/itex] is a non-zero [itex]v \in V[/itex] such that [itex]Av = \lambda v[/itex]. The eigenspace corresponding to the eigenvalue [itex]\lambda[/itex] is the set of all [itex]v \in V[/itex] such that [itex]Av = \lambda v[/itex]; the eigenspace is, as the name suggests, a subspace of [itex]V[/itex].
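    A small NumPy sketch of this definition (the helper `eigenspace_basis` is mine): compute an orthonormal basis of the eigenspace ##\ker(A-\lambda I)## via the SVD. The zero vector belongs to the eigenspace, but it is never returned as a basis vector:

    ```python
    import numpy as np

    def eigenspace_basis(A, lam, tol=1e-10):
        """Rows span ker(A - lam*I): an orthonormal basis of the eigenspace."""
        M = A - lam * np.eye(A.shape[0])
        _, s, Vt = np.linalg.svd(M)
        rank = int(np.sum(s > tol))
        # Right singular vectors with (numerically) zero singular value span the kernel.
        return Vt[rank:]

    A = np.diag([2.0, 2.0, 3.0])
    B = eigenspace_basis(A, 2.0)
    print(B.shape[0])   # dimension of the eigenspace for lam = 2
    ```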