What is the process for finding nth roots of a matrix in MATLAB?

  • Context: MATLAB
  • Thread starter: matematikawan
  • Tags: Matrix, Roots

Discussion Overview

The discussion revolves around the process of finding nth roots of matrices in MATLAB, specifically focusing on square roots and their properties. Participants explore theoretical aspects, computational methods, and the implications of eigenvalue decomposition in relation to matrix roots.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants question whether it makes sense to define the nth root for any square matrix and how many distinct matrices A exist such that A^n = X.
  • There is a suggestion that the MATLAB commands expm(0.5*logm(X)) and sqrtm(X) should yield the same results, with a focus on the behavior of eigenvalues.
  • One participant notes that a square matrix can have infinitely many square roots, while a nonsingular Jordan block has precisely two square roots.
  • Concerns are raised about the uniqueness of matrix square roots, particularly in relation to negative eigenvalues and the implications for complex numbers.
  • Participants discuss the eigenvalue decomposition and its role in computing matrix square roots, including the use of the matrix P and the diagonal matrix Λ.
  • There is mention of the relationship between matrix square roots and the decomposition of symmetric matrices, with references to the LDL^T form and the uniqueness of L and D.
  • Some participants explore the implications of using orthonormal matrices in generating multiple solutions for the equation X = A*A^T.

Areas of Agreement / Disagreement

Participants express varying opinions on the uniqueness of matrix roots, the conditions under which they exist, and the computational methods to obtain them. No consensus is reached on these topics, indicating a range of competing views and uncertainties.

Contextual Notes

Discussions include limitations regarding the assumptions made about eigenvalues, the definitions of matrix roots, and the potential for non-uniqueness in certain cases. The complexity of the topic is acknowledged, particularly in relation to non-diagonalizable matrices and the implications of complex arithmetic.

Who May Find This Useful

This discussion may be of interest to those studying linear algebra, numerical methods, or anyone looking to understand the computational aspects of matrix operations in MATLAB.

matematikawan
The MATLAB help states that the square roots of X = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix}

are

A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} , B = \begin{pmatrix} 1.5667 & 1.7408 \\ 2.6112 & 4.1779 \end{pmatrix}
, C = -A and D = -B .

When I used the MATLAB command expm(0.5*logm(X)) to compute the square root of X, I obtained the matrix B.

My questions:
1. Does it make sense to define the nth root of any given square matrix X ?

2. If it does, how many matrices A are there in general such that A^n = X ?

3. If I use the command expm(logm(X)/n) to compute \sqrt[n]{X} , which of these answers will I get?
 
A has the negative square root of one of the eigenvalues; B has the positive one.
Code:
expm(0.5*logm(X))
sqrtm(X)
These should give the same result, unless the implementation has changed.
Consider the following
Code:
eig([7 10;15 22])
eig(expm(logm([7 10;15 22])/4))
ans.^4

The first and last results should agree: the eigenvalues of the fourth root, raised to the fourth power, recover the eigenvalues of X. The idea is very similar to writing A = \underbrace{P\sqrt{\Lambda}P^{-1}}_{\sqrt{A}}\,\underbrace{P\sqrt{\Lambda}P^{-1}}_{\sqrt{A}}

I guess the root is unique for matrices whose eigenvalues are all positive, but it has been some time since I read about whether or not that is the case. In any event, I recommend
N. J. Higham, "Computing real square roots of a real matrix," Linear Algebra and its Applications, vol. 88–89, pp. 405–430, 1987.
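The equivalence of the two routes can be sketched numerically. Below is a Python/SciPy analogue of the MATLAB commands above (expm, logm and sqrtm have SciPy counterparts of the same names; the variable names are mine), checking that the eigendecomposition route, expm(0.5*logm(X)) and sqrtm(X) all return the same principal root:

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

X = np.array([[7.0, 10.0], [15.0, 22.0]])  # the example matrix from the thread

# Route 1: principal square root via the eigendecomposition X = P @ Lam @ inv(P)
lam, P = np.linalg.eig(X)
root_eig = P @ np.diag(np.sqrt(lam)) @ np.linalg.inv(P)

# Routes 2 and 3: the two commands discussed above, in their SciPy forms
root_log = expm(0.5 * logm(X))
root_m = sqrtm(X)

# All three agree, and squaring recovers X
print(np.allclose(root_eig, root_m), np.allclose(root_log, root_m))
print(np.allclose(root_m @ root_m, X))
```

Since X here has two distinct positive eigenvalues, all three routes pick out the root with positive eigenvalues (the matrix B above).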
 
Thank you trambolin for your reply and for suggesting http://reference.kfupm.edu.sa/content/c/o/computing_real_square_roots_of_a_real_ma_89991.pdf . This is a bit heavy for me; it may take me a long time to really understand the paper.
From my preliminary reading, it looks like a square matrix can have infinitely many square roots, while a nonsingular Jordan block (whatever that means) has precisely two square roots. I presume the existence of nth roots will be nontrivial.

I know there is a MATLAB command sqrtm. I'm thinking of expm(logm(X)/n) because I'm not sure whether MATLAB has a built-in function for nth roots in general.
So in general the command expm(logm(X)/n) gives a matrix whose eigenvalues are all positive (though I'm not sure whether it is unique).

When you write \sqrt{A} = P\sqrt{\Lambda}P^{-1} , what actually are P and \sqrt{\Lambda} ?

One more question. If I see a matrix such as A^{3/2}, is (\sqrt{A})^3 always equal to \sqrt{A^3} ?
 
Argh, I hate it when other people do that, but sometimes it really is done unknowingly, I guess. Sorry for the weird response.

First, \sqrt{A} = P\sqrt{\Lambda}P^{-1} comes from the eigenvalue decomposition. If your matrix is diagonalizable (forget about non-diagonalizable matrices for a second), then you obtain A = P\Lambda P^{-1} . In MATLAB, you can get it by simply
Code:
[P,Lambda] = eig(A);
P*Lambda*inv(P) - A
the result should be very small. (P is invertible only if the matrix is diagonalizable!) So the idea is to obtain the matrix P and then, just by taking the square root of the diagonal eigenvalue matrix \Lambda , obtain one possible square root of A. But uniqueness is a little complicated: a matrix square root is generally defined similarly to the scalar square root function, \sqrt{x^2} = |x| , so the matter of uniqueness is closely related to the convention you impose.
For non-diagonalizable matrices, Jordan blocks are involved, but the technique is essentially the same. Whenever the matrix has negative eigenvalues, the story gets complicated from the beginning, due to the non-uniqueness of the square root of a complex number. You can check http://en.wikipedia.org/wiki/Square_root_of_a_matrix for additional sources.
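To make the sign convention concrete: for a diagonalizable matrix with distinct positive eigenvalues, flipping the sign of each eigenvalue's square root independently produces 2^n distinct roots. A Python (NumPy) sketch on the thread's example matrix, which recovers the four roots A, B, -A, -B from the MATLAB help:

```python
import itertools
import numpy as np

X = np.array([[7.0, 10.0], [15.0, 22.0]])
lam, P = np.linalg.eig(X)
Pinv = np.linalg.inv(P)

# One root per choice of sign for each eigenvalue's square root: 2^2 = 4 here
roots = []
for signs in itertools.product([1.0, -1.0], repeat=len(lam)):
    R = P @ np.diag(np.array(signs) * np.sqrt(lam)) @ Pinv
    assert np.allclose(R @ R, X)  # every sign choice squares back to X
    roots.append(R)

# One mixed-sign choice reproduces the integer root from the MATLAB help
A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(any(np.allclose(R, A) for R in roots))
```

The all-positive choice is the principal root B; the two mixed-sign choices give ±A, since A and X share eigenvectors.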
 
Sorry, I forgot the last question. My piece of advice is to check every time what the author defines, just to be on the safe side.
 
>> A=[7 10; 15 22]
>> [P,Lambda] = eig(A)
>> lam3=Lambda.^(1/3)
>> P*lam3*inv(P) % this produces the same matrix as expm(logm(A)/3) :smile:

To make life simple, I will just make sure that all eigenvalues of A are positive. :biggrin:
If all eigenvalues are positive, is it true in general that
expm(logm(A)/n) = P*Lambda.^(1/n)*inv(P) ?


Your last advice will be adhered to.
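For what it's worth, the conjectured identity can be spot-checked numerically (a check for one example, not a proof). A Python/SciPy sketch with n = 3 and the thread's example matrix, whose eigenvalues are both positive:

```python
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[7.0, 10.0], [15.0, 22.0]])  # both eigenvalues positive
n = 3

# Eigendecomposition route: P * Lambda^(1/n) * inv(P)
lam, P = np.linalg.eig(X)
root_eig = P @ np.diag(lam ** (1.0 / n)) @ np.linalg.inv(P)

# Log/exp route: expm(logm(X)/n)
root_log = expm(logm(X) / n)

print(np.allclose(root_eig, root_log))
print(np.allclose(root_log @ root_log @ root_log, X))  # cubing recovers X
```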
 
I would say for MATLAB, yes! And I base this on the help comments of the expm and logm commands.

It turns out the code was written by Higham himself. Funny coincidence. Just type
Code:
edit expm.m
 
>> type expm

Yes, I saw the name. It couldn't be a coincidence. He's the expert! Who else could possibly have written the program?


I went through my work again. The matrix A^{5/2} most probably means \sqrt{A}\cdot A^2 .

Thank you again for your help.:smile:
 
Hi!

How does what you say on matrix square roots relate to the following similar problem:

X = A*A^T

Where X is a given (real or complex) symmetric matrix, T denotes transpose.

Any literature on this? How many solutions are there, etc.?

Thanks, Seb
 
  • #10
Seb said:
Hi!

How does what you say on matrix square roots relate to the following similar problem:

X = A*A^T

Where X is a given (real or complex) symmetric matrix, T denotes transpose.

There is always a solution in complex arithmetic, because any symmetric matrix X can be decomposed into

X = LDL^T (where L is lower triangular with 1s on the diagonal, and D is diagonal)
and then
A = LD^(1/2)

From this you can produce an infinite number of solutions
X = (AR)(R^TA^T)
for any orthonormal matrix R (i.e. R^T = R^-1)
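A numerical sketch of this construction (Python; scipy.linalg.ldl computes the LDL^T factorization, and the positive-definite example matrix is my own, chosen so that D is diagonal with non-negative entries and an elementwise square root suffices):

```python
import numpy as np
from scipy.linalg import ldl

X = np.array([[4.0, 2.0], [2.0, 3.0]])  # symmetric positive definite (made-up example)

# X = L D L^T with L unit lower triangular and D diagonal
L, D, _ = ldl(X)
A = L @ np.sqrt(D)  # A = L D^(1/2), so A @ A.T equals X
print(np.allclose(A @ A.T, X))

# Any orthonormal R (here a rotation by an arbitrary angle) gives another
# solution, since (AR)(AR)^T = A R R^T A^T = A A^T = X
t = 0.7
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
print(np.allclose((A @ R) @ (A @ R).T, X))
```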
 
  • #11
Thanks AlephZero! Very interesting ;-)

> There is always a solution in complex arithmetic, because any symmetric matrix X can be
> decomposed into
> X = LDL^T (where L is lower triangular with 1s on the diagonal, and D is diagonal)
Are L and D unique? How does one obtain L and D (e.g. in Matlab), and what is the decomposition called?

> and then
> A = LD^(1/2)
LD^(1/2)D^(1/2)L^T = A A^T cool!
So again, is A (of size n*n) unique? And how does it matter whether X is real or complex ("in complex arithmetic")?

> From this you can produce an infinite number of solutions
> X = (AR)(R^TA^T)
> for any orthonormal matrix R (i.e. R^T = R^-1)
I saw from searching the web that R is 'the group O(n)'.
So I guess there is a systematic way to express the R matrix as a linear combination of 'the basis' for O(n), then you explicitly see the degrees of freedom for A.
I saw from the web that for example in 2*2, one can use sin(x) and cos(x), to build rotation and reflection matrices.

In the original problem:
X = A A^T
would solutions where A is of size n*m also be possible?
Then the size of R would have to be m*m (or maybe, more generally, m*p?).

Regards! Seb
 
  • #12
Seb said:
Thanks AlephZero! Very interesting ;-)
> There is always a solution in complex arithmetic, because any symmetric matrix X can be
> decomposed into
> X = LDL^T (where L is lower triangular with 1s on the diagonal, and D is diagonal)
> Are L and D unique? How does one obtain L and D (e.g. in Matlab), and what is the decomposition called?
L and D are unique. I don't know what this is called in Matlab. The algorithm is the same as the LDU decomposition for a non-symmetric matrix. The LDU decomposition is a standard method for solving linear equations so it would be very surprising if it isn't in Matlab!

> and then
> A = LD^(1/2)
> LD^(1/2)D^(1/2)L^T = A A^T cool!
> So again is A (of size n*n) unique?
No, because a diagonal matrix D usually has many square roots. There are two possible values for each non-zero diagonal term so there could be as many as 2^n different square roots of a matrix of order n.

If X is positive semi-definite, then all the diagonals of D will be non-negative. In that case, if you take all the terms of D^(1/2) as non-negative, LD^(1/2) is the same as the Cholesky decomposition of X.

> and how does it matter if X is real or complex ("in complex arithmetic")?
If X is real then L and D are real. However, D^(1/2) cannot be real if some elements of D are negative.
If X is complex then in general L and D are both complex matrices.

> From this you can produce an infinite number of solutions
> X = (AR)(R^TA^T)
> for any orthonormal matrix R (i.e. R^T = R^-1)
> I saw from searching the web that R is 'the group O(n)'.
> So I guess there is a systematic way to express the R matrix as a linear combination of 'the basis' for O(n), then you explicitly see the degrees of freedom for A.
You seem a bit confused here. A basis is associated with a vector space, not a group. I'm not sure what you are trying to say.

> I saw from the web that for example in 2*2, one can use sin(x) and cos(x), to build rotation and reflection matrices.
True, and that is one way to construct orthonormal matrices.

> In the original problem:
> X = A A^T
> Also solutions where A is of size n*m would be possible?
> Then size of R has to be m*m (or maybe more general m*p?).

The answer is obviously yes (think about the simplest case, where X is a 1x1 matrix), but I wasn't thinking about that situation when I wrote the earlier post.
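A quick numerical illustration of the rectangular case (Python; the 1x3 row is a made-up example):

```python
import numpy as np

# 1x1 case: X = [9] is solved by any 1xm row whose entries' squares sum to 9
X = np.array([[9.0]])
A = np.array([[1.0, 2.0, 2.0]])  # hypothetical 1x3 solution: 1 + 4 + 4 = 9
print(np.allclose(A @ A.T, X))

# Right-multiplying by an m x m orthonormal Q preserves A @ A.T,
# so the whole family A @ Q also solves X = A A^T
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
print(np.allclose((A @ Q) @ (A @ Q).T, X))
```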
 
