
Nth roots of a matrix

  1. Dec 16, 2009 #1
    Matlab help states that the square roots of [tex]X = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix}[/tex] are

    [tex]A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}[/tex] , [tex]B = \begin{pmatrix} 1.5667 & 1.7408 \\ 2.6112 & 4.1779 \end{pmatrix}[/tex]
    , C = -A and D = -B.

    When I used the matlab command expm(0.5*logm(X)) to compute the square root of X, I obtained the matrix B.

    My questions:
    1. Does it make sense to define the nth root for any given square matrix X ?

    2. If it does, in general how many matrices A are there such that [itex]A^n = X[/itex] ?

    3. If I am to use the command expm(logm(X)/n) to compute [itex]\sqrt[n]{X}[/itex], which answer will I get?
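    (For readers without MATLAB: the same computation can be sketched in Python with SciPy's expm/logm, which mirror the MATLAB commands above. This is my own check, not part of the original question.)

```python
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[7.0, 10.0], [15.0, 22.0]])

# Analogue of MATLAB's expm(0.5*logm(X)): the principal square root.
B = expm(0.5 * logm(X))

print(B)        # close to [[1.5667, 1.7408], [2.6112, 4.1779]]
print(B @ B)    # recovers X up to rounding
```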
  3. Dec 17, 2009 #2
    A has the negative square root of one of the eigenvalues; B has the positive one. The built-in sqrtm command should give the same result, if they did not change it.
    Consider the following
    Code (Text):

    eig([7 10;15 22]).^(1/4)
    eig(expm(logm([7 10;15 22])/4))
    They should give the same answer. The idea is very similar to having [tex]A = \underbrace{P\sqrt{\Lambda}P^{-1}}_{\sqrt{A}}\,\underbrace{P\sqrt{\Lambda}P^{-1}}_{\sqrt{A}}[/tex]

    I guess the square root is unique for matrices whose eigenvalues are all positive, but it has been some time since I read about whether it is unique or not. Nevertheless, I recommend
    N. Higham, "Computing real square roots of a real matrix," Linear Algebra and its Applications, vol. 88, pp. 405–430, 1987
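    A quick way to see the eigenvalue correspondence above (a Python/SciPy sketch of the same comparison; the variable names are mine):

```python
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[7.0, 10.0], [15.0, 22.0]])

# Eigenvalues of the principal 4th root are the 4th roots of X's eigenvalues.
root4 = expm(logm(X) / 4)
lam_X = np.sort(np.linalg.eigvals(X).real)
lam_R = np.sort(np.linalg.eigvals(root4).real)

print(lam_X ** 0.25)
print(lam_R)    # same numbers, up to rounding
```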
    Last edited: Dec 17, 2009
  4. Dec 17, 2009 #3
    Thank you trambolin for your reply and for suggesting http://reference.kfupm.edu.sa/content/c/o/computing_real_square_roots_of_a_real_ma_89991.pdf. It is a bit heavy for me; it might take me a long time to really understand the paper.
    From my preliminary reading, it looks like a square matrix can have infinitely many square roots, but a nonsingular Jordan block (whatever that means) has precisely two square roots. I presume the existence of the nth root will be nontrivial.

    I know there is a matlab command sqrtm. I'm thinking of expm(logm(X)/n) because I'm not sure whether matlab has a built-in function for nth roots in general.
    So in general the command expm(logm(X)/n) will give a matrix whose eigenvalues are all positive (but I'm not sure whether it is unique).

    When you write [itex]\sqrt{A} = P\sqrt{\Lambda}P^{-1}[/itex], what actually are P and [itex]\sqrt{\Lambda}[/itex] ?

    One more question. If I see a matrix such as [itex]A^{3/2}[/itex], is [itex](\sqrt{A})^3[/itex] always equal to [itex]\sqrt{A^3}[/itex] ?
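    (A numeric check I tried myself, not from the thread: for this X, whose eigenvalues are positive, the two orders of operation agree for the principal root.)

```python
import numpy as np
from scipy.linalg import sqrtm

X = np.array([[7.0, 10.0], [15.0, 22.0]])

lhs = np.linalg.matrix_power(sqrtm(X), 3)    # (sqrt(X))^3
rhs = sqrtm(np.linalg.matrix_power(X, 3))    # sqrt(X^3)

print(np.allclose(lhs, rhs))    # True here
```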
    Last edited by a moderator: May 4, 2017
  5. Dec 17, 2009 #4
    Argh, I hate it when other people do it, but sometimes it really happens without noticing, I guess. Sorry for the weird response.

    First, [itex]\sqrt{A} = P\sqrt{\Lambda}P^{-1}[/itex] comes from the eigenvalue decomposition. If your matrix is diagonalizable (forget about non-diagonalizable matrices for a second), then you obtain [itex]A = P\Lambda P^{-1}[/itex]. In Matlab, you can get it by simply
    Code (Text):
    [P,Lambda] = eig(A);
    P*Lambda*inv(P) - A
    the result should be very small. (eig returns an invertible P only if the matrix is diagonalizable!) So the idea is to obtain the matrix P; then, just by taking the square root of the diagonal eigenvalue matrix [itex]\Lambda[/itex], we obtain one possible square root of A. But uniqueness is a little bit complicated. A matrix square root is generally defined similarly to the scalar square root function, where [itex]\sqrt{x^2} = |x|[/itex]. So the matter of uniqueness is closely related to the convention you impose.
    For non-diagonalizable matrices, Jordan blocks are involved, but the technique is essentially the same. Whenever the matrix has negative eigenvalues, the story gets complicated from the beginning, due to the non-uniqueness of the square root of a complex number, etc. You can check http://en.wikipedia.org/wiki/Square_root_of_a_matrix for additional sources.
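    The recipe above, in a short Python/NumPy sketch (assuming a diagonalizable matrix with positive eigenvalues; the names are my own):

```python
import numpy as np

A = np.array([[7.0, 10.0], [15.0, 22.0]])   # diagonalizable, positive eigenvalues

lam, P = np.linalg.eig(A)                    # A = P @ diag(lam) @ inv(P)
sqrtA = P @ np.diag(np.sqrt(lam)) @ np.linalg.inv(P)

print(np.allclose(sqrtA @ sqrtA, A))         # True: one square root of A
```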
    Last edited by a moderator: May 4, 2017
  6. Dec 17, 2009 #5
    Sorry, I forgot the last question. My piece of advice is to check every time what the author is defining, to be on the safe side.
  7. Dec 18, 2009 #6
    >> A=[7 10; 15 22]
    >> [P,Lambda] = eig(A)
    >> lam3=Lambda.^(1/3)
    >> P*lam3*inv(P) % this produces the same matrix as expm(logm(A)/3) :smile:

    To make life simple, I will just make sure that all eigenvalues of A are positive. :biggrin:
    If all eigenvalues are positive, is it true in general that
    expm(logm(A)/n) = P*Lambda.^(1/n)*inv(P) ??

    Your last advice will be adhered to.
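    (A sketch of the conjectured identity in Python/SciPy, since I can test it there; note that .^ on a diagonal Lambda only powers the eigenvalues, which is what makes the identity plausible.)

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[7.0, 10.0], [15.0, 22.0]])   # all eigenvalues positive
n = 3

lam, P = np.linalg.eig(A)
via_eig = P @ np.diag(lam ** (1.0 / n)) @ np.linalg.inv(P)
via_log = expm(logm(A) / n)

print(np.allclose(via_eig, via_log))        # True for this A
```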
  8. Dec 18, 2009 #7
    I would say for MATLAB yes! And I would like to base this on the help comments of expm and logm commands.

    It turns out to be coded by Higham himself. Funny coincidence. Just type
    Code (Text):

    edit expm.m
  9. Dec 18, 2009 #8
    >> type expm

    Yes, I saw the name. It couldn't be a coincidence. He's the expert! Who else could possibly have written the program?

    I went through my work again. The matrix [itex]A^{5/2}[/itex] most probably means [itex]\sqrt{A}\cdot A^2[/itex]

    Thank you again for your help.:smile:
  10. Dec 12, 2010 #9




    How does what you say on matrix square roots relate to the following similar problem:

    X = A*A^T

    where X is a given (real or complex) symmetric matrix, and T denotes transpose.

    Any literature on this? How many Solutions etc.

    Thanks, Seb
  11. Dec 12, 2010 #10



    There is always a solution in complex arithmetic, because any symmetric matrix X can be decomposed into

    X = LDL^T (where L is lower triangular with 1s on the diagonal, and D is diagonal)
    and then
    A = LD^(1/2)

    From this you can produce an infinite number of solutions
    X = (AR)(R^T A^T)
    for any orthogonal matrix R (i.e. R^T = R^-1)
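    A small Python/SciPy sketch of this construction (the example matrix is my own choice; scipy.linalg.ldl computes the LDL^T factorization):

```python
import numpy as np
from scipy.linalg import ldl

X = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive definite example

L, D, _ = ldl(X)                  # X = L @ D @ L.T (no pivoting needed here)
A = L @ np.sqrt(D)                # one factor with X = A @ A.T
print(np.allclose(A @ A.T, X))    # True

# Any orthogonal R gives another factor A @ R.
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(np.allclose((A @ R) @ (A @ R).T, X))    # True, since R @ R.T = I
```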
    Last edited: Dec 12, 2010
  12. Dec 13, 2010 #11



    Thanks AlephZero! Very interesting ;-)

    > There is always a solution in complex arithmetic, because any symmetric matrix X can be
    > decomposed into
    > X = LDL^T (where L is lower triangular with 1s on the diagonal, and D is diagonal)
    Are L and D unique? How do I obtain L and D (e.g. in Matlab), and what is the name of the decomposition?

    > and then
    > A = LD^(1/2)
    LD^(1/2)D^(1/2)L^T = A A^T = X, cool!
    So again, is A (of size n*n) unique? And how does it matter whether X is real or complex ("in complex arithmetic")?

    > From this you can produce an infinite number of solutions
    > X = (AR)(R^TA^T)
    > for any orthonormal matrix R (i.e. R^T = R^-1)
    I saw from searching the web that R is in 'the group O(n)'.
    So I guess there is a systematic way to express the R matrix as a linear combination of 'the basis' for O(n); then you explicitly see the degrees of freedom for A.
    I saw on the web that, for example, in the 2*2 case one can use sin(x) and cos(x) to build rotation and reflection matrices.

    In the original problem:
    X = A A^T
    Would solutions where A is of size n*m also be possible?
    Then the size of R would have to be m*m (or maybe more general, m*p?).

    Regards! Seb
  13. Dec 14, 2010 #12



    L and D are unique. I don't know what this is called in Matlab. The algorithm is the same as the LDU decomposition for a non-symmetric matrix. The LDU decomposition is a standard method for solving linear equations so it would be very surprising if it isn't in Matlab!

    No, because a diagonal matrix D usually has many square roots: there are two possible sign choices for each non-zero diagonal term, so there can be as many as 2^n different square roots for a matrix of order n.

    If X is positive semi-definite, then all the diagonals of D will be non-negative. In that case, if you take all the terms of D^(1/2) as non-negative, LD^(1/2) is the same as the Cholesky decomposition of X.
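    (Checking the Cholesky claim numerically, with an example matrix of my own choosing:)

```python
import numpy as np
from scipy.linalg import ldl, cholesky

X = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive definite

L, D, _ = ldl(X)
factor_ldl = L @ np.sqrt(D)               # L * D^(1/2), non-negative roots
factor_chol = cholesky(X, lower=True)     # lower-triangular Cholesky factor

print(np.allclose(factor_ldl, factor_chol))   # True
```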

    If X is real then L and D are real. However, D^(1/2) cannot be real if some elements of D are negative.
    If X is complex then in general L and D are both complex matrices.

    You seem a bit confused here. A basis is associated with a vector space, not a group. I'm not sure what you are trying to say.

    True, and that is one way to construct orthogonal matrices.

    The answer is obviously yes (think about the simplest case, where X is a 1x1 matrix), but I wasn't thinking about that situation when I wrote the earlier post.
    Last edited: Dec 14, 2010